| arxiv_id (float64) | title (string) | authors (string) | categories (string) | summary (string) | published (string) | comments (string) | journal_ref (string) | doi (string) | ss_title (string) | ss_authors (string) | ss_year (float64) | ss_venue (string) | ss_citationCount (float64) | ss_referenceCount (float64) | ss_fieldsOfStudy (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.04921 | Crystal: Introspective Reasoners Reinforced with Self-Feedback | ['Jiacheng Liu', 'Ramakanth Pasunuru', 'Hannaneh Hajishirzi', 'Yejin Choi', 'Asli Celikyilmaz'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized. However, existing implementations, including "chain-of-thought" and its v... | 2023-10-07T21:23:58Z | EMNLP 2023 main conference | null | null | null | null | null | null | null | null | null |
2310.04928 | Large Language Models Only Pass Primary School Exams in Indonesia: A Comprehensive Test on IndoMMLU | ['Fajri Koto', 'Nurul Aisyah', 'Haonan Li', 'Timothy Baldwin'] | ['cs.CL'] | Although large language models (LLMs) are often pre-trained on large-scale multilingual texts, their reasoning abilities and real-world knowledge are mainly evaluated based on English datasets. Assessing LLM capabilities beyond English is increasingly vital but hindered due to the lack of suitable datasets. In this wor... | 2023-10-07T21:49:38Z | Accepted at EMNLP 2023 | null | null | Large Language Models Only Pass Primary School Exams in Indonesia: A Comprehensive Test on IndoMMLU | ['Fajri Koto', 'Nurul Aisyah', 'Haonan Li', 'Timothy Baldwin'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 46 | 61 | ['Computer Science'] |
2310.04945 | Balancing Specialized and General Skills in LLMs: The Impact of Modern Tuning and Data Strategy | ['Zheng Zhang', 'Chen Zheng', 'Da Tang', 'Ke Sun', 'Yukun Ma', 'Yingtong Bu', 'Xun Zhou', 'Liang Zhao'] | ['cs.CL', 'cs.AI'] | This paper introduces a multifaceted methodology for fine-tuning and evaluating large language models (LLMs) for specialized monetization tasks. The goal is to balance general language proficiency with domain-specific skills. The methodology has three main components: 1) Carefully blending in-domain and general-purpose... | 2023-10-07T23:29:00Z | null | null | null | null | null | null | null | null | null | null |
2310.04948 | TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting | ['Defu Cao', 'Furong Jia', 'Sercan O Arik', 'Tomas Pfister', 'Yixiang Zheng', 'Wen Ye', 'Yan Liu'] | ['cs.LG', 'cs.CL'] | The past decade has witnessed significant advances in time series modeling with deep learning. While achieving state-of-the-art results, the best-performing architectures vary highly across applications and domains. Meanwhile, for natural language processing, the Generative Pre-trained Transformer (GPT) has demonstrate... | 2023-10-08T00:02:25Z | Accepted by ICLR 2024. Camera Ready Version | null | null | TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting | ['Defu Cao', 'Furong Jia', 'Sercan Ö. Arik', 'Tomas Pfister', 'Yixiang Zheng', 'Wen Ye', 'Yan Liu'] | 2023 | International Conference on Learning Representations | 138 | 73 | ['Computer Science'] |
2310.05209 | Scaling Laws of RoPE-based Extrapolation | ['Xiaoran Liu', 'Hang Yan', 'Shuo Zhang', 'Chenxin An', 'Xipeng Qiu', 'Dahua Lin'] | ['cs.CL', 'cs.AI'] | The extrapolation capability of Large Language Models (LLMs) based on Rotary Position Embedding is currently a topic of considerable interest. The mainstream approach to addressing extrapolation with LLMs involves modifying RoPE by replacing 10000, the rotary base of $\theta_n={10000}^{-2n/d}$ in the original RoPE, wit... | 2023-10-08T15:50:36Z | 26 pages, 12 figures, Accepted by ICLR 2024 | null | null | null | null | null | null | null | null | null |
2310.05344 | SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF | ['Yi Dong', 'Zhilin Wang', 'Makesh Narsimhan Sreedhar', 'Xianchao Wu', 'Oleksii Kuchaiev'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Model alignment with human preferences is an essential step in making Large Language Models (LLMs) helpful and consistent with human values. It typically consists of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) stages. However, RLHF faces inherent limitations stemming from a comple... | 2023-10-09T02:11:21Z | Findings of EMNLP 2023 | null | null | SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF | ['Yi Dong', 'Zhilin Wang', 'Makesh Narsimhan Sreedhar', 'Xianchao Wu', 'Oleksii Kuchaiev'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 73 | 47 | ['Computer Science'] |
2310.05470 | Generative Judge for Evaluating Alignment | ['Junlong Li', 'Shichao Sun', 'Weizhe Yuan', 'Run-Ze Fan', 'Hai Zhao', 'Pengfei Liu'] | ['cs.CL', 'cs.AI'] | The rapid development of Large Language Models (LLMs) has substantially expanded the range of tasks they can address. In the field of Natural Language Processing (NLP), researchers have shifted their focus from conventional NLP tasks (e.g., sequence tagging and parsing) towards tasks that revolve around aligning with h... | 2023-10-09T07:27:15Z | Fix typos in Table 1 | null | null | null | null | null | null | null | null | null |
2310.05506 | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | ['Chengpeng Li', 'Zheng Yuan', 'Hongyi Yuan', 'Guanting Dong', 'Keming Lu', 'Jiancan Wu', 'Chuanqi Tan', 'Xiang Wang', 'Chang Zhou'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In math reasoning with large language models (LLMs), fine-tuning data augmentation by query evolution and diverse reasoning paths is empirically verified effective, profoundly narrowing the gap between open-sourced LLMs and cutting-edge proprietary LLMs. In this paper, we conduct an investigation for such data augmenta... | 2023-10-09T08:18:58Z | Accepted to ACL 2024 Main Conference | null | null | null | null | null | null | null | null | null |
2310.05737 | Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | ['Lijun Yu', 'José Lezama', 'Nitesh B. Gundavarapu', 'Luca Versari', 'Kihyuk Sohn', 'David Minnen', 'Yong Cheng', 'Vighnesh Birodkar', 'Agrim Gupta', 'Xiuye Gu', 'Alexander G. Hauptmann', 'Boqing Gong', 'Ming-Hsuan Yang', 'Irfan Essa', 'David A. Ross', 'Lu Jiang'] | ['cs.CV', 'cs.AI', 'cs.MM'] | While Large Language Models (LLMs) are the dominant models for generative tasks in language, they do not perform as well as diffusion models on image and video generation. To effectively use LLMs for visual generation, one crucial component is the visual tokenizer that maps pixel-space inputs to discrete tokens appropr... | 2023-10-09T14:10:29Z | ICLR 2024 | null | null | Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | ['Lijun Yu', 'José Lezama', 'N. B. Gundavarapu', 'Luca Versari', 'Kihyuk Sohn', 'David C. Minnen', 'Yong Cheng', 'Agrim Gupta', 'Xiuye Gu', 'Alexander G. Hauptmann', 'Boqing Gong', 'Ming-Hsuan Yang', 'Irfan Essa', 'David A. Ross', 'Lu Jiang'] | 2023 | null | 325 | 82 | ['Computer Science'] |
2310.05910 | SALMON: Self-Alignment with Instructable Reward Models | ['Zhiqing Sun', 'Yikang Shen', 'Hongxin Zhang', 'Qinhong Zhou', 'Zhenfang Chen', 'David Cox', 'Yiming Yang', 'Chuang Gan'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Supervised Fine-Tuning (SFT) on response demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to i... | 2023-10-09T17:56:53Z | Previous Title: SALMON: Self-Alignment with Principle-Following Reward Models. Accepted to ICLR 2024. Project page: https://github.com/IBM/SALMON | null | null | null | null | null | null | null | null | null |
2310.05914 | NEFTune: Noisy Embeddings Improve Instruction Finetuning | ['Neel Jain', 'Ping-yeh Chiang', 'Yuxin Wen', 'John Kirchenbauer', 'Hong-Min Chu', 'Gowthami Somepalli', 'Brian R. Bartoldson', 'Bhavya Kailkhura', 'Avi Schwarzschild', 'Aniruddha Saha', 'Micah Goldblum', 'Jonas Geiping', 'Tom Goldstein'] | ['cs.CL', 'cs.LG'] | We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over ... | 2023-10-09T17:58:34Z | 25 pages, Code is available on Github: https://github.com/neelsjain/NEFTune | null | null | null | null | null | null | null | null | null |
2310.06116 | OptiMUS: Optimization Modeling Using MIP Solvers and large language models | ['Ali AhmadiTeshnizi', 'Wenzhi Gao', 'Madeleine Udell'] | ['cs.AI'] | Optimization problems are pervasive across various sectors, from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adopt... | 2023-10-09T19:47:03Z | null | null | null | null | null | null | null | null | null | null |
2310.06266 | CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model | ['Peng Di', 'Jianguo Li', 'Hang Yu', 'Wei Jiang', 'Wenting Cai', 'Yang Cao', 'Chaoyu Chen', 'Dajun Chen', 'Hongwei Chen', 'Liang Chen', 'Gang Fan', 'Jie Gong', 'Zi Gong', 'Wen Hu', 'Tingting Guo', 'Zhichao Lei', 'Ting Li', 'Zheng Li', 'Ming Liang', 'Cong Liao', 'Bingchang Liu', 'Jiachen Liu', 'Zhiwei Liu', 'Shaojun Lu'... | ['cs.SE', 'cs.AI', 'cs.LG'] | Code Large Language Models (Code LLMs) have gained significant attention in the industry due to their wide applications in the full lifecycle of software engineering. However, the effectiveness of existing models in understanding non-English inputs for multi-lingual code-related tasks is still far from well studied. Th... | 2023-10-10T02:38:44Z | Accepted by ICSE-SEIP 2024 | null | 10.1145/3639477.3639719 | CodeFuse-13B: A Pretrained Multi-Lingual Code Large Language Model | ['Peng Di', 'Jianguo Li', 'Hang Yu', 'Wei Jiang', 'Wenting Cai', 'Yang Cao', 'Chaoyu Chen', 'Dajun Chen', 'Hongwei Chen', 'Liang Chen', 'Gang Fan', 'Jie Gong', 'Zi Gong', 'Wen Hu', 'Tingting Guo', 'Zhichao Lei', 'Ting Li', 'Zheng Li', 'Ming Liang', 'Cong Liao', 'Bingchang Liu', 'Jiachen Liu', 'Zhiwei Liu', 'Shaojun Lu'... | 2023 | 2024 IEEE/ACM 46th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP) | 14 | 55 | ['Computer Science'] |
2310.06434 | Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition | ['Srijith Radhakrishnan', 'Chao-Han Huck Yang', 'Sumeer Ahmad Khan', 'Rohit Kumar', 'Narsis A. Kiani', 'David Gomez-Cabrero', 'Jesper N. Tegner'] | ['cs.CL', 'cs.AI', 'cs.MM', 'cs.SD', 'eess.AS'] | We introduce a new cross-modal fusion technique designed for generative error correction in automatic speech recognition (ASR). Our methodology leverages both acoustic information and external linguistic representations to generate accurate speech transcription contexts. This marks a step towards a fresh paradigm in ge... | 2023-10-10T09:04:33Z | Accepted to EMNLP 2023 as main paper. 10 pages. Revised math notations. GitHub: https://github.com/Srijith-rkr/Whispering-LLaMA | null | null | null | null | null | null | null | null | null |
2310.06474 | Multilingual Jailbreak Challenges in Large Language Models | ['Yue Deng', 'Wenxuan Zhang', 'Sinno Jialin Pan', 'Lidong Bing'] | ['cs.CL'] | While large language models (LLMs) exhibit remarkable capabilities across a wide range of tasks, they pose potential safety concerns, such as the ``jailbreak'' problem, wherein malicious instructions can manipulate LLMs to exhibit undesirable behavior. Although several preventive measures have been developed to mitigat... | 2023-10-10T09:44:06Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
2310.06694 | Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning | ['Mengzhou Xia', 'Tianyu Gao', 'Zhiyuan Zeng', 'Danqi Chen'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured prunin... | 2023-10-10T15:13:30Z | The code and models are available at https://github.com/princeton-nlp/LLM-Shearing | null | null | null | null | null | null | null | null | null |
2310.06770 | SWE-bench: Can Language Models Resolve Real-World GitHub Issues? | ['Carlos E. Jimenez', 'John Yang', 'Alexander Wettig', 'Shunyu Yao', 'Kexin Pei', 'Ofir Press', 'Karthik Narasimhan'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We find real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. To this ... | 2023-10-10T16:47:29Z | Data, code, and leaderboard are available at https://www.swebench.com ICLR 2024, https://openreview.net/forum?id=VTF8yNQM66 | null | null | null | null | null | null | null | null | null |
2310.06786 | OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text | ['Keiran Paster', 'Marco Dos Santos', 'Zhangir Azerbayev', 'Jimmy Ba'] | ['cs.AI', 'cs.CL', 'cs.LG'] | There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models. For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web,... | 2023-10-10T16:57:28Z | null | null | null | null | null | null | null | null | null | null |
2310.06825 | Mistral 7B | ['Albert Q. Jiang', 'Alexandre Sablayrolles', 'Arthur Mensch', 'Chris Bamford', 'Devendra Singh Chaplot', 'Diego de las Casas', 'Florian Bressand', 'Gianna Lengyel', 'Guillaume Lample', 'Lucile Saulnier', 'Lélio Renard Lavaud', 'Marie-Anne Lachaux', 'Pierre Stock', 'Teven Le Scao', 'Thibaut Lavril', 'Thomas Wang', 'Tim... | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inferenc... | 2023-10-10T17:54:58Z | Models and code are available at https://mistral.ai/news/announcing-mistral-7b/ | null | null | Mistral 7B | ['Albert Qiaochu Jiang', 'Alexandre Sablayrolles', 'A. Mensch', 'Chris Bamford', 'Devendra Singh Chaplot', 'Diego de Las Casas', 'Florian Bressand', 'Gianna Lengyel', 'Guillaume Lample', 'Lucile Saulnier', "L'elio Renard Lavaud", 'M. Lachaux', 'Pierre Stock', 'Teven Le Scao', 'Thibaut Lavril', 'Thomas Wang', 'Timothée ... | 2023 | arXiv.org | 2266 | 29 | ['Computer Science'] |
2310.06830 | Lemur: Harmonizing Natural Language and Code for Language Agents | ['Yiheng Xu', 'Hongjin Su', 'Chen Xing', 'Boyu Mi', 'Qian Liu', 'Weijia Shi', 'Binyuan Hui', 'Fan Zhou', 'Yitao Liu', 'Tianbao Xie', 'Zhoujun Cheng', 'Siheng Zhao', 'Lingpeng Kong', 'Bailin Wang', 'Caiming Xiong', 'Tao Yu'] | ['cs.CL'] | We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents. The evolution from language chat models to functional language agents demands that models not only master human interaction, reasoning, ... | 2023-10-10T17:57:45Z | ICLR 2024 Spotlight; https://github.com/OpenLemur/Lemur | null | null | null | null | null | null | null | null | null |
2310.06927 | Sparse Fine-tuning for Inference Acceleration of Large Language Models | ['Eldar Kurtic', 'Denis Kuznedelev', 'Elias Frantar', 'Michael Goin', 'Dan Alistarh'] | ['cs.CL', 'cs.AI'] | We consider the problem of accurate sparse fine-tuning of large language models (LLMs), that is, fine-tuning pretrained LLMs on specialized tasks, while inducing sparsity in their weights. On the accuracy side, we observe that standard loss-based fine-tuning may fail to recover accuracy, especially at high sparsities. ... | 2023-10-10T18:28:38Z | null | null | null | Sparse Fine-tuning for Inference Acceleration of Large Language Models | ['Eldar Kurtic', 'Denis Kuznedelev', 'Elias Frantar', 'M. Goin', 'Dan Alistarh'] | 2023 | arXiv.org | 13 | 35 | ['Computer Science'] |
2310.06987 | Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | ['Yangsibo Huang', 'Samyak Gupta', 'Mengzhou Xia', 'Kai Li', 'Danqi Chen'] | ['cs.CL', 'cs.AI', 'cs.CR'] | The rapid progress in open-source large language models (LLMs) is significantly advancing AI development. Extensive efforts have been made before model release to align their behavior with human values, with the primary goal of ensuring their helpfulness and harmlessness. However, even carefully aligned models can be m... | 2023-10-10T20:15:54Z | null | null | null | null | null | null | null | null | null | null |
2310.07160 | LLark: A Multimodal Instruction-Following Language Model for Music | ['Josh Gardner', 'Simon Durand', 'Daniel Stoller', 'Rachel M. Bittner'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Music has a unique and complex structure which is challenging for both expert humans and existing AI systems to understand, and presents unique challenges relative to other forms of audio. We present LLark, an instruction-tuned multimodal model for \emph{music} understanding. We detail our process for dataset creation,... | 2023-10-11T03:12:47Z | ICML camera-ready version | null | null | null | null | null | null | null | null | null |
2310.07276 | BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations | ['Qizhi Pei', 'Wei Zhang', 'Jinhua Zhu', 'Kehan Wu', 'Kaiyuan Gao', 'Lijun Wu', 'Yingce Xia', 'Rui Yan'] | ['cs.CL', 'cs.AI', 'cs.LG', 'q-bio.BM'] | Recent advancements in biological research leverage the integration of molecules, proteins, and natural language to enhance drug discovery. However, current models exhibit several limitations, such as the generation of invalid molecular SMILES, underutilization of contextual information, and equal treatment of structur... | 2023-10-11T07:57:08Z | Accepted by Empirical Methods in Natural Language Processing 2023 (EMNLP 2023) | null | null | BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations | ['Qizhi Pei', 'Wei Zhang', 'Jinhua Zhu', 'Kehan Wu', 'Kaiyuan Gao', 'Lijun Wu', 'Yingce Xia', 'Rui Yan'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 75 | 101 | ['Computer Science', 'Biology'] |
2310.07321 | On the Impact of Cross-Domain Data on German Language Models | ['Amin Dada', 'Aokun Chen', 'Cheng Peng', 'Kaleb E Smith', 'Ahmad Idrissi-Yaghir', 'Constantin Marc Seibold', 'Jianning Li', 'Lars Heiliger', 'Xi Yang', 'Christoph M. Friedrich', 'Daniel Truhn', 'Jan Egger', 'Jiang Bian', 'Jens Kleesiek', 'Yonghui Wu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Traditionally, large language models have been either trained on general web crawls or domain-specific data. However, recent successes of generative large language models, have shed light on the benefits of cross-domain datasets. To examine the significance of prioritizing data diversity over quality, we present a Germ... | 2023-10-11T09:09:55Z | 13 pages, 1 figure, accepted at Findings of the Association for Computational Linguistics: EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2310.07338 | From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models | ['Xumeng Wen', 'Han Zhang', 'Shun Zheng', 'Wei Xu', 'Jiang Bian'] | ['cs.LG'] | Tabular data is foundational to predictive modeling in various crucial industries, including healthcare, finance, retail, sustainability, etc. Despite the progress made in specialized models, there is an increasing demand for universal models that can transfer knowledge, generalize from limited data, and follow human i... | 2023-10-11T09:37:38Z | Accepted by KDD 2024 | null | null | From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models | ['Xumeng Wen', 'Han Zhang', 'Shun Zheng', 'Wei Xu', 'Jiang Bian'] | 2023 | Knowledge Discovery and Data Mining | 23 | 45 | ['Computer Science'] |
2310.07554 | Retrieve Anything To Augment Large Language Models | ['Peitian Zhang', 'Shitao Xiao', 'Zheng Liu', 'Zhicheng Dou', 'Jian-Yun Nie'] | ['cs.IR'] | Large language models (LLMs) face significant challenges stemming from their inherent limitations in knowledge, memory, alignment, and action. These challenges cannot be addressed by LLMs alone, but should rely on assistance from the external world, such as knowledge base, memory store, demonstration examples, and tool... | 2023-10-11T14:59:53Z | null | null | null | null | null | null | null | null | null | null |
2310.07699 | VeCLIP: Improving CLIP Training via Visual-enriched Captions | ['Zhengfeng Lai', 'Haotian Zhang', 'Bowen Zhang', 'Wentao Wu', 'Haoping Bai', 'Aleksei Timofeev', 'Xianzhi Du', 'Zhe Gan', 'Jiulong Shan', 'Chen-Nee Chuah', 'Yinfei Yang', 'Meng Cao'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Large-scale web-crawled datasets are fundamental for the success of pre-training vision-language models, such as CLIP. However, the inherent noise and potential irrelevance of web-crawled AltTexts pose challenges in achieving precise image-text alignment. Existing methods utilizing large language models (LLMs) for capt... | 2023-10-11T17:49:13Z | CV/ML | null | null | null | null | null | null | null | null | null |
2310.07713 | InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining | ['Boxin Wang', 'Wei Ping', 'Lawrence McAfee', 'Peng Xu', 'Bo Li', 'Mohammad Shoeybi', 'Bryan Catanzaro'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG'] | Pretraining auto-regressive large language models~(LLMs) with retrieval demonstrates better perplexity and factual accuracy by leveraging external databases. However, the size of existing pretrained retrieval-augmented LLM is still limited (e.g., Retro has 7.5B parameters), which limits the effectiveness of instruction... | 2023-10-11T17:59:05Z | ICML 2024 | null | null | null | null | null | null | null | null | null |
2310.07889 | LangNav: Language as a Perceptual Representation for Navigation | ['Bowen Pan', 'Rameswar Panda', 'SouYoung Jin', 'Rogerio Feris', 'Aude Oliva', 'Phillip Isola', 'Yoon Kim'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.RO'] | We explore the use of language as a perceptual representation for vision-and-language navigation (VLN), with a focus on low-data settings. Our approach uses off-the-shelf vision systems for image captioning and object detection to convert an agent's egocentric panoramic view at each time step into natural language desc... | 2023-10-11T20:52:30Z | null | null | null | LangNav: Language as a Perceptual Representation for Navigation | ['Bowen Pan', 'Rameswar Panda', 'SouYoung Jin', 'Rogério Feris', 'Aude Oliva', 'Phillip Isola', 'Yoon Kim'] | 2023 | NAACL-HLT | 21 | 67 | ['Computer Science'] |
2310.08096 | ClimateBERT-NetZero: Detecting and Assessing Net Zero and Reduction Targets | ['Tobias Schimanski', 'Julia Bingler', 'Camilla Hyslop', 'Mathias Kraus', 'Markus Leippold'] | ['cs.LG'] | Public and private actors struggle to assess the vast amounts of information about sustainability commitments made by various institutions. To address this problem, we create a novel tool for automatically detecting corporate, national, and regional net zero and reduction targets in three steps. First, we introduce an ... | 2023-10-12T07:43:27Z | null | null | null | ClimateBERT-NetZero: Detecting and Assessing Net Zero and Reduction Targets | ['Tobias Schimanski', 'J. Bingler', 'Camilla Hyslop', 'Mathias Kraus', 'Markus Leippold'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 23 | 23 | ['Computer Science'] |
2310.08164 | Interpreting Learned Feedback Patterns in Large Language Models | ['Luke Marks', 'Amir Abdullah', 'Clement Neo', 'Rauno Arike', 'David Krueger', 'Philip Torr', 'Fazl Barez'] | ['cs.LG'] | Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term \textit{Learned Feedback Pattern} (LFP) for patterns in an LLM's activations learned during RLH... | 2023-10-12T09:36:03Z | 19 pages, 8 figures | null | null | Interpreting Learned Feedback Patterns in Large Language Models | ['Luke Marks', 'Amir Abdullah', 'Clement Neo', 'Rauno Arike', 'David Krueger', 'Philip H. S. Torr', 'Fazl Barez'] | 2023 | Neural Information Processing Systems | 3 | 38 | ['Computer Science'] |
2310.08166 | Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning | ['Junyu Lu', 'Dixiang Zhang', 'Xiaojun Wu', 'Xinyu Gao', 'Ruyi Gan', 'Jiaxing Zhang', 'Yan Song', 'Pingjian Zhang'] | ['cs.CL'] | Recent advancements enlarge the capabilities of large language models (LLMs) in zero-shot image-to-text generation and understanding by integrating multi-modal inputs. However, such success is typically limited to English scenarios due to the lack of large-scale and high-quality non-English multi-modal resources, makin... | 2023-10-12T09:39:17Z | null | null | null | null | null | null | null | null | null | null |
2310.08182 | XIMAGENET-12: An Explainable AI Benchmark Dataset for Model Robustness Evaluation | ['Qiang Li', 'Dan Zhang', 'Shengzhao Lei', 'Xun Zhao', 'Porawit Kamnoedboon', 'WeiWei Li', 'Junhao Dong', 'Shuyan Li'] | ['cs.CV', 'cs.LG'] | Despite the promising performance of existing visual models on public benchmarks, the critical assessment of their robustness for real-world applications remains an ongoing challenge. To bridge this gap, we propose an explainable visual dataset, XIMAGENET-12, to evaluate the robustness of visual models. XIMAGENET-12 co... | 2023-10-12T10:17:40Z | Paper accepted by Synthetic Data for Computer Vision Workshop @ IEEE CVPR 2024 | null | null | null | null | null | null | null | null | null |
2310.08232 | Language Models are Universal Embedders | ['Xin Zhang', 'Zehan Li', 'Yanzhao Zhang', 'Dingkun Long', 'Pengjun Xie', 'Meishan Zhang', 'Min Zhang'] | ['cs.CL'] | In the large language model (LLM) revolution, embedding is a key component of various systems, such as retrieving knowledge or memories for LLMs or building content moderation filters. As such cases span from English to other natural or programming languages, from retrieval to classification and beyond, it is advantage... | 2023-10-12T11:25:46Z | XLLM Workshop, ACL 2025 | null | null | Language Models are Universal Embedders | ['Xin Zhang', 'Zehan Li', 'Yanzhao Zhang', 'Dingkun Long', 'Pengjun Xie', 'Meishan Zhang', 'Min Zhang'] | 2023 | arXiv.org | 9 | 85 | ['Computer Science'] |
2310.08278 | Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting | ['Kashif Rasul', 'Arjun Ashok', 'Andrew Robert Williams', 'Hena Ghonia', 'Rishika Bhagwatkar', 'Arian Khorasani', 'Mohammad Javad Darvishi Bayazi', 'George Adamopoulos', 'Roland Riachi', 'Nadhir Hassen', 'Marin Biloš', 'Sahil Garg', 'Anderson Schneider', 'Nicolas Chapados', 'Alexandre Drouin', 'Valentina Zantedeschi', ... | ['cs.LG', 'cs.AI'] | Over the past years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite the success of foundation models in modalities such as natural language processing and computer vision, the development of foundatio... | 2023-10-12T12:29:32Z | First two authors contributed equally. All data, models and code used are open-source. GitHub: https://github.com/time-series-foundation-models/lag-llama | null | null | Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting | ['Kashif Rasul', 'Arjun Ashok', 'Andrew Robert Williams', 'Arian Khorasani', 'George Adamopoulos', 'Rishika Bhagwatkar', 'Marin Bilovs', 'Hena Ghonia', 'N. Hassen', 'Anderson Schneider', 'Sahil Garg', 'Alexandre Drouin', 'Nicolas Chapados', 'Yuriy Nevmyvaka', 'I. Rish'] | 2023 | null | 50 | 66 | ['Computer Science'] |
2310.08319 | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | ['Xueguang Ma', 'Liang Wang', 'Nan Yang', 'Furu Wei', 'Jimmy Lin'] | ['cs.IR'] | The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can... | 2023-10-12T13:32:35Z | null | null | null | Fine-Tuning LLaMA for Multi-Stage Text Retrieval | ['Xueguang Ma', 'Liang Wang', 'Nan Yang', 'Furu Wei', 'Jimmy Lin'] | 2023 | Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | 226 | 63 | ['Computer Science'] |
2310.08348 | LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios | ['Yazhe Niu', 'Yuan Pu', 'Zhenjie Yang', 'Xueyan Li', 'Tong Zhou', 'Jiyuan Ren', 'Shuai Hu', 'Hongsheng Li', 'Yu Liu'] | ['cs.LG'] | Building agents based on tree-search planning capabilities with learned models has achieved remarkable success in classic decision-making problems, such as Go and Atari. However, it has been deemed challenging or even infeasible to extend Monte Carlo Tree Search (MCTS) based algorithms to diverse real-world application... | 2023-10-12T14:18:09Z | NeurIPS 2023 Spotlight | null | null | LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios | ['Yazhe Niu', 'Yuan Pu', 'Zhenjie Yang', 'Xueyan Li', 'Tong Zhou', 'Jiyuan Ren', 'Shuai Hu', 'Hongsheng Li', 'Yu Liu'] | 2023 | Neural Information Processing Systems | 15 | 80 | ['Computer Science'] |
2310.08491 | Prometheus: Inducing Fine-grained Evaluation Capability in Language Models | ['Seungone Kim', 'Jamin Shin', 'Yejin Cho', 'Joel Jang', 'Shayne Longpre', 'Hwaran Lee', 'Sangdoo Yun', 'Seongjin Shin', 'Sungdong Kim', 'James Thorne', 'Minjoon Seo'] | ['cs.CL', 'cs.LG'] | Recently, using a powerful proprietary Large Language Model (LLM) (e.g., GPT-4) as an evaluator for long-form responses has become the de facto standard. However, for practitioners with large-scale evaluation tasks and custom criteria in consideration (e.g., child-readability), using proprietary LLMs as an evaluator is... | 2023-10-12T16:50:08Z | ICLR 2024 | null | null | Prometheus: Inducing Fine-grained Evaluation Capability in Language Models | ['Seungone Kim', 'Jamin Shin', 'Yejin Cho', 'Joel Jang', 'S. Longpre', 'Hwaran Lee', 'Sangdoo Yun', 'Seongjin Shin', 'Sungdong Kim', 'James Thorne', 'Minjoon Seo'] | 2023 | International Conference on Learning Representations | 240 | 31 | ['Computer Science'] |
2310.08588 | Octopus: Embodied Vision-Language Programmer from Environmental Feedback | ['Jingkang Yang', 'Yuhao Dong', 'Shuai Liu', 'Bo Li', 'Ziyue Wang', 'Chencheng Jiang', 'Haoran Tan', 'Jiamu Kang', 'Yuanhan Zhang', 'Kaiyang Zhou', 'Ziwei Liu'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO'] | Large vision-language models (VLMs) have achieved substantial progress in
multimodal perception and reasoning. When integrated into an embodied agent,
existing embodied VLM works either output detailed action sequences at the
manipulation level or only provide plans at an abstract level, leaving a gap
between high-leve... | 2023-10-12T17:59:58Z | Project Page: https://choiszt.github.io/Octopus/, Codebase:
https://github.com/dongyh20/Octopus | null | null | null | null | null | null | null | null | null |
2310.08659 | LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models | ['Yixiao Li', 'Yifan Yu', 'Chen Liang', 'Pengcheng He', 'Nikos Karampatziakis', 'Weizhu Chen', 'Tuo Zhao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Quantization is an indispensable technique for serving Large Language Models
(LLMs) and has recently found its way into LoRA fine-tuning. In this work we
focus on the scenario where quantization and LoRA fine-tuning are applied
together on a pre-trained model. In such cases it is common to observe a
consistent gap in t... | 2023-10-12T18:34:08Z | null | null | null | null | null | null | null | null | null | null |
2310.08754 | Tokenizer Choice For LLM Training: Negligible or Crucial? | ['Mehdi Ali', 'Michael Fromm', 'Klaudia Thellmann', 'Richard Rutmann', 'Max Lübbering', 'Johannes Leveling', 'Katrin Klug', 'Jan Ebert', 'Niclas Doll', 'Jasper Schulze Buschhoff', 'Charvi Jain', 'Alexander Arno Weber', 'Lena Jurkschat', 'Hammam Abdelwahab', 'Chelsea John', 'Pedro Ortiz Suarez', 'Malte Ostendorff', 'Sam... | ['cs.LG'] | The recent success of Large Language Models (LLMs) has been predominantly
driven by curating the training dataset composition, scaling of model
architectures and dataset sizes and advancements in pretraining objectives,
leaving tokenizer influence as a blind spot. Shedding light on this
underexplored area, we conduct a... | 2023-10-12T22:44:19Z | null | null | null | Tokenizer Choice For LLM Training: Negligible or Crucial? | ['Mehdi Ali', 'Michael Fromm', 'Klaudia Thellmann', 'Richard Rutmann', 'Max Lübbering', 'Johannes Leveling', 'Katrin Klug', 'Jan Ebert', 'Niclas Doll', 'Jasper Schulze Buschhoff', 'Charvi Jain', 'Alexander Arno Weber', 'Lena Jurkschat', 'Hammam Abdelwahab', 'Chelsea John', 'Pedro Ortiz Suarez', 'Malte Ostendorff', 'Sam... | 2023 | NAACL-HLT | 61 | 83 | ['Computer Science'] |
2310.08864 | Open X-Embodiment: Robotic Learning Datasets and RT-X Models | ['Open X-Embodiment Collaboration', "Abby O'Neill", 'Abdul Rehman', 'Abhinav Gupta', 'Abhiram Maddukuri', 'Abhishek Gupta', 'Abhishek Padalkar', 'Abraham Lee', 'Acorn Pooley', 'Agrim Gupta', 'Ajay Mandlekar', 'Ajinkya Jain', 'Albert Tung', 'Alex Bewley', 'Alex Herzog', 'Alex Irpan', 'Alexander Khazatsky', 'Anant Rai', ... | ['cs.RO'] | Large, high-capacity models trained on diverse datasets have shown remarkable
successes on efficiently tackling downstream applications. In domains from NLP
to Computer Vision, this has led to a consolidation of pretrained models, with
general pretrained backbones serving as a starting point for many applications.
Can ... | 2023-10-13T05:20:40Z | Project website: https://robotics-transformer-x.github.io | null | null | Open X-Embodiment: Robotic Learning Datasets and RT-X Models | ['A. Padalkar', 'A. Pooley', 'Ajinkya Jain', 'Alex Bewley', 'Alex Herzog', 'A. Irpan', 'Alexander Khazatsky', 'Anant Rai', 'Anikait Singh', 'Anthony Brohan', 'A. Raffin', 'Ayzaan Wahid', 'Ben Burgess-Limerick', 'Beomjoon Kim', 'Bernhard Schölkopf', 'Brian Ichter', 'Cewu Lu', 'Charles Xu', 'Chelsea Finn', 'Chenfeng Xu',... | 2023 | arXiv.org | 531 | 135 | ['Computer Science'] |
2310.09017 | Don't Add, don't Miss: Effective Content Preserving Generation from
Pre-Selected Text Spans | ['Aviv Slobodkin', 'Avi Caciularu', 'Eran Hirsch', 'Ido Dagan'] | ['cs.CL'] | The recently introduced Controlled Text Reduction (CTR) task isolates the
text generation step within typical summarization-style tasks. It does so by
challenging models to generate coherent text conforming to pre-selected content
within the input text (``highlights''). This framing enables increased
modularity in summ... | 2023-10-13T11:28:02Z | EMNLP 2023, findings | null | null | Don't Add, don't Miss: Effective Content Preserving Generation from Pre-Selected Text Spans | ['Aviv Slobodkin', 'Avi Caciularu', 'Eran Hirsch', 'Ido Dagan'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 3 | 47 | ['Computer Science'] |
2310.09141 | PuoBERTa: Training and evaluation of a curated language model for
Setswana | ['Vukosi Marivate', "Moseli Mots'Oehli", 'Valencia Wagner', 'Richard Lastrucci', 'Isheanesu Dzingirai'] | ['cs.CL'] | Natural language processing (NLP) has made significant progress for
well-resourced languages such as English but lagged behind for low-resource
languages like Setswana. This paper addresses this gap by presenting PuoBERTa,
a customised masked language model trained specifically for Setswana. We cover
how we collected, ... | 2023-10-13T14:33:02Z | Accepted for SACAIR 2023 | null | null | null | null | null | null | null | null | null |
2310.09168 | Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through
Active Exploration | ['Fanqi Wan', 'Xinting Huang', 'Tao Yang', 'Xiaojun Quan', 'Wei Bi', 'Shuming Shi'] | ['cs.CL'] | Instruction-tuning can be substantially optimized through enhanced diversity,
resulting in models capable of handling a broader spectrum of tasks. However,
existing data employed for such tuning often exhibit an inadequate coverage of
individual domains, limiting the scope for nuanced comprehension and
interactions wit... | 2023-10-13T15:03:15Z | Accepted to EMNLP 2023 (Main Conference) | null | null | Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration | ['Fanqi Wan', 'Xinting Huang', 'Tao Yang', 'Xiaojun Quan', 'Wei Bi', 'Shuming Shi'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 21 | 44 | ['Computer Science'] |
2310.09199 | PaLI-3 Vision Language Models: Smaller, Faster, Stronger | ['Xi Chen', 'Xiao Wang', 'Lucas Beyer', 'Alexander Kolesnikov', 'Jialin Wu', 'Paul Voigtlaender', 'Basil Mustafa', 'Sebastian Goodman', 'Ibrahim Alabdulmohsin', 'Piotr Padlewski', 'Daniel Salz', 'Xi Xiong', 'Daniel Vlasic', 'Filip Pavetic', 'Keran Rong', 'Tianli Yu', 'Daniel Keysers', 'Xiaohua Zhai', 'Radu Soricut'] | ['cs.CV'] | This paper presents PaLI-3, a smaller, faster, and stronger vision language
model (VLM) that compares favorably to similar models that are 10x larger. As
part of arriving at this strong performance, we compare Vision Transformer
(ViT) models pretrained using classification objectives to contrastively
(SigLIP) pretraine... | 2023-10-13T15:45:19Z | null | null | null | null | null | null | null | null | null | null |
2310.09219 | "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in
LLM-Generated Reference Letters | ['Yixin Wan', 'George Pu', 'Jiao Sun', 'Aparna Garimella', 'Kai-Wei Chang', 'Nanyun Peng'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have recently emerged as an effective tool to
assist individuals in writing various types of content, including professional
documents such as recommendation letters. Though bringing convenience, this
application also introduces unprecedented fairness concerns. Model-generated
reference let... | 2023-10-13T16:12:57Z | Accepted to EMNLP 2023 Findings | null | null | null | null | null | null | null | null | null |
2310.09343 | Dialogue Chain-of-Thought Distillation for Commonsense-aware
Conversational Agents | ['Hyungjoo Chae', 'Yongho Song', 'Kai Tzu-iunn Ong', 'Taeyoon Kwon', 'Minjin Kim', 'Youngjae Yu', 'Dongha Lee', 'Dongyeop Kang', 'Jinyoung Yeo'] | ['cs.CL', 'cs.AI'] | Human-like chatbots necessitate the use of commonsense reasoning in order to
effectively comprehend and respond to implicit information present within
conversations. Achieving such coherence and informativeness in responses,
however, is a non-trivial task. Even for large language models (LLMs), the task
of identifying ... | 2023-10-13T18:17:23Z | 25 pages, 8 figures, Accepted to EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2310.09478 | MiniGPT-v2: large language model as a unified interface for
vision-language multi-task learning | ['Jun Chen', 'Deyao Zhu', 'Xiaoqian Shen', 'Xiang Li', 'Zechun Liu', 'Pengchuan Zhang', 'Raghuraman Krishnamoorthi', 'Vikas Chandra', 'Yunyang Xiong', 'Mohamed Elhoseiny'] | ['cs.CV'] | Large language models have shown their remarkable capabilities as a general
interface for various language-related applications. Motivated by this, we
target to build a unified interface for completing many vision-language tasks
including image description, visual question answering, and visual grounding,
among others.... | 2023-10-14T03:22:07Z | null | null | null | null | null | null | null | null | null | null |
2310.09736 | Domain-Specific Language Model Post-Training for Indonesian Financial
NLP | ['Ni Putu Intan Maharani', 'Yoga Yustiawan', 'Fauzy Caesar Rochim', 'Ayu Purwarianti'] | ['cs.CL', 'cs.AI'] | BERT and IndoBERT have achieved impressive performance in several NLP tasks.
There has been several investigation on its adaption in specialized domains
especially for English language. We focus on financial domain and Indonesian
language, where we perform post-training on pre-trained IndoBERT for financial
domain usin... | 2023-10-15T05:07:08Z | Accepted in ICEEI 2023 (International Conference on Electrical
Engineering and Informatics 2023) | null | null | null | null | null | null | null | null | null |
2310.09765 | MILPaC: A Novel Benchmark for Evaluating Translation of Legal Text to
Indian Languages | ['Sayan Mahapatra', 'Debtanu Datta', 'Shubham Soni', 'Adrijit Goswami', 'Saptarshi Ghosh'] | ['cs.CL', 'cs.AI'] | Most legal text in the Indian judiciary is written in complex English due to
historical reasons. However, only a small fraction of the Indian population is
comfortable in reading English. Hence legal text needs to be made available in
various Indian languages, possibly by translating the available legal text from
Engli... | 2023-10-15T07:49:56Z | To be published in ACM Transactions on Asian and Low-Resource
Language Information Processing (TALLIP) | null | null | MILPaC: A Novel Benchmark for Evaluating Translation of Legal Text to Indian Languages | ['Sayan Mahapatra', 'Debtanu Datta', 'Shubham Soni', 'A. Goswami', 'Saptarshi Ghosh'] | 2023 | null | 2 | 27 | ['Computer Science'] |
2310.10083 | JMedLoRA:Medical Domain Adaptation on Japanese Large Language Models
using Instruction-tuning | ['Issey Sukeda', 'Masahiro Suzuki', 'Hiroki Sakaji', 'Satoshi Kodera'] | ['cs.CL'] | In the ongoing wave of impact driven by large language models (LLMs) like
ChatGPT, the adaptation of LLMs to medical domain has emerged as a crucial
research frontier. Since mainstream LLMs tend to be designed for
general-purpose applications, constructing a medical LLM through domain
adaptation is a huge challenge. Wh... | 2023-10-16T05:28:28Z | 8 pages, 1 figures | null | null | null | null | null | null | null | null | null |
2310.10118 | Learning to Rank Context for Named Entity Recognition Using a Synthetic
Dataset | ['Arthur Amalvy', 'Vincent Labatut', 'Richard Dufour'] | ['cs.CL'] | While recent pre-trained transformer-based models can perform named entity
recognition (NER) with great accuracy, their limited range remains an issue
when applied to long documents such as whole novels. To alleviate this issue, a
solution is to retrieve relevant context at the document level. Unfortunately,
the lack o... | 2023-10-16T06:53:12Z | null | Conference on Empirical Methods in Natural Language Processing
(EMNLP), ACL, Dec 2023, Singapore, Singapore. pp.10372-10382 | null | Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset | ['Arthur Amalvy', 'Vincent Labatut', 'Richard Dufour'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 9 | 29 | ['Computer Science'] |
2310.10159 | Joint Music and Language Attention Models for Zero-shot Music Tagging | ['Xingjian Du', 'Zhesong Yu', 'Jiaju Lin', 'Bilei Zhu', 'Qiuqiang Kong'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Music tagging is a task to predict the tags of music recordings. However,
previous music tagging research primarily focuses on close-set music tagging
tasks which can not be generalized to new tags. In this work, we propose a
zero-shot music tagging system modeled by a joint music and language attention
(JMLA) model to... | 2023-10-16T08:00:16Z | \begin{keywords} Music tagging, joint music and language attention
models, Music Foundation Model. \end{keywords} | null | null | Joint Music and Language Attention Models for Zero-Shot Music Tagging | ['Xingjian Du', 'Zhesong Yu', 'Jiaju Lin', 'Bilei Zhu', 'Qiuqiang Kong'] | 2023 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 9 | 25 | ['Computer Science', 'Engineering'] |
2310.10482 | xCOMET: Transparent Machine Translation Evaluation through Fine-grained
Error Detection | ['Nuno M. Guerreiro', 'Ricardo Rei', 'Daan van Stigt', 'Luisa Coheur', 'Pierre Colombo', 'André F. T. Martins'] | ['cs.CL'] | Widely used learned metrics for machine translation evaluation, such as COMET
and BLEURT, estimate the quality of a translation hypothesis by providing a
single sentence-level score. As such, they offer little insight into
translation errors (e.g., what are the errors and what is their severity). On
the other hand, gen... | 2023-10-16T15:03:14Z | Work in progress | null | null | null | null | null | null | null | null | null |
2310.10505 | ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method
for Aligning Large Language Models | ['Ziniu Li', 'Tian Xu', 'Yushun Zhang', 'Zhihang Lin', 'Yang Yu', 'Ruoyu Sun', 'Zhi-Quan Luo'] | ['cs.LG'] | Reinforcement Learning from Human Feedback (RLHF) is key to aligning Large
Language Models (LLMs), typically paired with the Proximal Policy Optimization
(PPO) algorithm. While PPO is a powerful method designed for general
reinforcement learning tasks, it is overly sophisticated for LLMs, leading to
laborious hyper-par... | 2023-10-16T15:25:14Z | null | null | null | null | null | null | null | null | null | null |
2310.10631 | Llemma: An Open Language Model For Mathematics | ['Zhangir Azerbayev', 'Hailey Schoelkopf', 'Keiran Paster', 'Marco Dos Santos', 'Stephen McAleer', 'Albert Q. Jiang', 'Jia Deng', 'Stella Biderman', 'Sean Welleck'] | ['cs.CL', 'cs.AI', 'cs.LO'] | We present Llemma, a large language model for mathematics. We continue
pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web
data containing mathematics, and mathematical code, yielding Llemma. On the
MATH benchmark Llemma outperforms all known open base models, as well as the
unreleased Miner... | 2023-10-16T17:54:07Z | Updated references; corrected description of COPRA search budget | null | null | Llemma: An Open Language Model For Mathematics | ['Zhangir Azerbayev', 'Hailey Schoelkopf', 'Keiran Paster', 'Marco Dos Santos', 'S. McAleer', 'Albert Q. Jiang', 'Jia Deng', 'Stella Biderman', 'S. Welleck'] | 2023 | International Conference on Learning Representations | 303 | 92 | ['Computer Science'] |
2310.10636 | Dual-Encoders for Extreme Multi-Label Classification | ['Nilesh Gupta', 'Devvrit Khatri', 'Ankit S Rawat', 'Srinadh Bhojanapalli', 'Prateek Jain', 'Inderjit Dhillon'] | ['cs.LG'] | Dual-encoder (DE) models are widely used in retrieval tasks, most commonly
studied on open QA benchmarks that are often characterized by multi-class and
limited training data. In contrast, their performance in multi-label and
data-rich retrieval settings like extreme multi-label classification (XMC),
remains under-expl... | 2023-10-16T17:55:43Z | 27 pages, 8 figures | ICLR 2024 camera-ready publication | null | null | null | null | null | null | null | null |
2310.10638 | In-context Pretraining: Language Modeling Beyond Document Boundaries | ['Weijia Shi', 'Sewon Min', 'Maria Lomeli', 'Chunting Zhou', 'Margaret Li', 'Gergely Szilvasy', 'Rich James', 'Xi Victoria Lin', 'Noah A. Smith', 'Luke Zettlemoyer', 'Scott Yih', 'Mike Lewis'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LMs) are currently trained to predict tokens given
document prefixes, enabling them to directly perform long-form generation and
prompting-style tasks which can be reduced to document completion. Existing
pretraining pipelines train LMs by concatenating random sets of short documents
to create in... | 2023-10-16T17:57:12Z | null | null | null | null | null | null | null | null | null | null |
2310.10688 | A decoder-only foundation model for time-series forecasting | ['Abhimanyu Das', 'Weihao Kong', 'Rajat Sen', 'Yichen Zhou'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Motivated by recent advances in large language models for Natural Language
Processing (NLP), we design a time-series foundation model for forecasting
whose out-of-the-box zero-shot performance on a variety of public datasets
comes close to the accuracy of state-of-the-art supervised forecasting models
for each individu... | 2023-10-14T17:01:37Z | null | null | null | null | null | null | null | null | null | null |
2310.10773 | Gotta be SAFE: A New Framework for Molecular Design | ['Emmanuel Noutahi', 'Cristian Gabellini', 'Michael Craig', 'Jonathan S. C Lim', 'Prudencio Tossou'] | ['cs.LG', 'q-bio.BM'] | Traditional molecular string representations, such as SMILES, often pose
challenges for AI-driven molecular design due to their non-sequential depiction
of molecular substructures. To address this issue, we introduce Sequential
Attachment-based Fragment Embedding (SAFE), a novel line notation for chemical
structures. S... | 2023-10-16T19:12:56Z | Code, data and models available at:
https://github.com/datamol-io/safe/ | null | null | null | null | null | null | null | null | null |
2310.10920 | NuclearQA: A Human-Made Benchmark for Language Models for the Nuclear
Domain | ['Anurag Acharya', 'Sai Munikoti', 'Aaron Hellinger', 'Sara Smith', 'Sridevi Wagle', 'Sameera Horawalavithana'] | ['cs.CL', 'cs.AI', 'I.2.7'] | As LLMs have become increasingly popular, they have been used in almost every
field. But as the application for LLMs expands from generic fields to narrow,
focused science domains, there exists an ever-increasing gap in ways to
evaluate their efficacy in those fields. For the benchmarks that do exist, a
lot of them foc... | 2023-10-17T01:27:20Z | 9 pages | null | null | null | null | null | null | null | null | null |
2310.10962 | Large Language Models can Contrastively Refine their Generation for
Better Sentence Representation Learning | ['Huiming Wang', 'Zhaodonghui Li', 'Liying Cheng', 'Soh De Wen', 'Lidong Bing'] | ['cs.CL'] | Recently, large language models (LLMs) have emerged as a groundbreaking
technology and their unparalleled text generation capabilities have sparked
interest in their application to the fundamental sentence representation
learning task. Existing methods have explored utilizing LLMs as data annotators
to generate synthes... | 2023-10-17T03:21:43Z | NAACL 2024 | null | null | null | null | null | null | null | null | null |
2310.10981 | Instructive Dialogue Summarization with Query Aggregations | ['Bin Wang', 'Zhengyuan Liu', 'Nancy F. Chen'] | ['cs.CL'] | Conventional dialogue summarization methods directly generate summaries and
do not consider user's specific interests. This poses challenges in cases where
the users are more focused on particular topics or aspects. With the
advancement of instruction-finetuned language models, we introduce
instruction-tuning to dialog... | 2023-10-17T04:03:00Z | EMNLP 2023 Main Conference - Summarization (update for
acknowledgement) | null | null | Instructive Dialogue Summarization with Query Aggregations | ['Bin Wang', 'Zhengyuan Liu', 'Nancy F. Chen'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 3 | 54 | ['Computer Science'] |
2310.11081 | Understanding writing style in social media with a supervised
contrastively pre-trained transformer | ['Javier Huertas-Tato', 'Alejandro Martin', 'David Camacho'] | ['cs.CL', 'cs.SI'] | Online Social Networks serve as fertile ground for harmful behavior, ranging
from hate speech to the dissemination of disinformation. Malicious actors now
have unprecedented freedom to misbehave, leading to severe societal unrest and
dire consequences, as exemplified by events such as the Capitol assault during
the US ... | 2023-10-17T09:01:17Z | null | null | null | null | null | null | null | null | null | null |
2310.11230 | Zipformer: A faster and better encoder for automatic speech recognition | ['Zengwei Yao', 'Liyong Guo', 'Xiaoyu Yang', 'Wei Kang', 'Fangjun Kuang', 'Yifan Yang', 'Zengrui Jin', 'Long Lin', 'Daniel Povey'] | ['eess.AS', 'cs.LG', 'cs.SD'] | The Conformer has become the most popular encoder model for automatic speech
recognition (ASR). It adds convolution modules to a transformer to learn both
local and global dependencies. In this work we describe a faster, more
memory-efficient, and better-performing transformer, called Zipformer. Modeling
changes includ... | 2023-10-17T13:01:10Z | Published as a conference paper at ICLR 2024 | null | null | null | null | null | null | null | null | null |
2310.11275 | xMEN: A Modular Toolkit for Cross-Lingual Medical Entity Normalization | ['Florian Borchert', 'Ignacio Llorca', 'Roland Roller', 'Bert Arnrich', 'Matthieu-P. Schapranow'] | ['cs.CL'] | Objective: To improve performance of medical entity normalization across many
languages, especially when fewer language resources are available compared to
English.
Materials and Methods: We introduce xMEN, a modular system for cross-lingual
medical entity normalization, which performs well in both low- and
high-reso... | 2023-10-17T13:53:57Z | 16 pages, 3 figures | JAMIA Open, Volume 8, Issue 1, February 2025, ooae147 | 10.1093/jamiaopen/ooae147 | null | null | null | null | null | null | null |
2310.11441 | Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V | ['Jianwei Yang', 'Hao Zhang', 'Feng Li', 'Xueyan Zou', 'Chunyuan Li', 'Jianfeng Gao'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.HC'] | We present Set-of-Mark (SoM), a new visual prompting method, to unleash the
visual grounding abilities of large multimodal models (LMMs), such as GPT-4V.
As illustrated in Fig. 1 (right), we employ off-the-shelf interactive
segmentation models, such as SEEM/SAM, to partition an image into regions at
different levels of... | 2023-10-17T17:51:31Z | null | null | null | null | null | null | null | null | null | null |
2310.11448 | 4K4D: Real-Time 4D View Synthesis at 4K Resolution | ['Zhen Xu', 'Sida Peng', 'Haotong Lin', 'Guangzhao He', 'Jiaming Sun', 'Yujun Shen', 'Hujun Bao', 'Xiaowei Zhou'] | ['cs.CV'] | This paper targets high-fidelity and real-time view synthesis of dynamic 3D
scenes at 4K resolution. Recently, some methods on dynamic view synthesis have
shown impressive rendering quality. However, their speed is still limited when
rendering high-resolution images. To overcome this problem, we propose 4K4D, a
4D poin... | 2023-10-17T17:57:38Z | Project Page: https://zju3dv.github.io/4k4d | null | null | 4K4D: Real-Time 4D View Synthesis at 4K Resolution | ['Zhen Xu', 'Sida Peng', 'Haotong Lin', 'Guangzhao He', 'Jiaming Sun', 'Yujun Shen', 'Hujun Bao', 'Xiaowei Zhou'] | 2023 | Computer Vision and Pattern Recognition | 61 | 106 | ['Computer Science'] |
2310.11511 | Self-RAG: Learning to Retrieve, Generate, and Critique through
Self-Reflection | ['Akari Asai', 'Zeqiu Wu', 'Yizhong Wang', 'Avirup Sil', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases ... | 2023-10-17T18:18:32Z | 30 pages, 2 figures, 12 tables | null | null | null | null | null | null | null | null | null |
2310.11716 | Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning | ['Ming Li', 'Lichang Chen', 'Jiuhai Chen', 'Shwai He', 'Heng Huang', 'Jiuxiang Gu', 'Tianyi Zhou'] | ['cs.CL'] | Recent advancements in Large Language Models (LLMs) have expanded the
horizons of natural language understanding and generation. Notably, the output
control and alignment with the input of LLMs can be refined through instruction
tuning. However, as highlighted in several studies, low-quality data in the
training set ar... | 2023-10-18T05:13:47Z | null | null | null | Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning | ['Ming Li', 'Lichang Chen', 'Jiuhai Chen', 'Shwai He', 'Heng Huang', 'Jiuxiang Gu', 'Tianyi Zhou'] | 2023 | arXiv.org | 24 | 43 | ['Computer Science'] |
2310.12036 | A General Theoretical Paradigm to Understand Learning from Human
Preferences | ['Mohammad Gheshlaghi Azar', 'Mark Rowland', 'Bilal Piot', 'Daniel Guo', 'Daniele Calandriello', 'Michal Valko', 'Rémi Munos'] | ['cs.AI', 'cs.LG', 'stat.ML'] | The prevalent deployment of learning from human preferences through
reinforcement learning (RLHF) relies on two important approximations: the first
assumes that pairwise preferences can be substituted with pointwise rewards.
The second assumes that a reward model trained on these pointwise rewards can
generalize from c... | 2023-10-18T15:21:28Z | null | null | null | null | null | null | null | null | null | null |
2310.12109 | Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture | ['Daniel Y. Fu', 'Simran Arora', 'Jessica Grogan', 'Isys Johnson', 'Sabri Eyuboglu', 'Armin W. Thomas', 'Benjamin Spector', 'Michael Poli', 'Atri Rudra', 'Christopher Ré'] | ['cs.LG'] | Machine learning models are increasingly being scaled in both sequence length
and model dimension to reach longer contexts and better performance. However,
existing architectures such as Transformers scale quadratically along both
these axes. We ask: are there performant architectures that can scale
sub-quadratically a... | 2023-10-18T17:06:22Z | NeurIPS 2023 (Oral) | null | null | Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture | ['Daniel Y. Fu', 'Simran Arora', 'Jessica Grogan', 'Isys Johnson', 'Sabri Eyuboglu', 'Armin W. Thomas', 'B. Spector', 'Michael Poli', 'A. Rudra', "Christopher R'e"] | 2023 | Neural Information Processing Systems | 52 | 0 | ['Computer Science'] |
2310.12190 | DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors | ['Jinbo Xing', 'Menghan Xia', 'Yong Zhang', 'Haoxin Chen', 'Wangbo Yu', 'Hanyuan Liu', 'Xintao Wang', 'Tien-Tsin Wong', 'Ying Shan'] | ['cs.CV'] | Animating a still image offers an engaging visual experience. Traditional
image animation techniques mainly focus on animating natural scenes with
stochastic dynamics (e.g. clouds and fluid) or domain-specific motions (e.g.
human hair or body motions), and thus limits their applicability to more
general visual content.... | 2023-10-18T14:42:16Z | Project page: https://doubiiu.github.io/projects/DynamiCrafter | null | null | null | null | null | null | null | null | null |
2310.12371 | Property-Aware Multi-Speaker Data Simulation: A Probabilistic Modelling
Technique for Synthetic Data Generation | ['Tae Jin Park', 'He Huang', 'Coleman Hooper', 'Nithin Koluguri', 'Kunal Dhawan', 'Ante Jukic', 'Jagadeesh Balam', 'Boris Ginsburg'] | ['eess.AS', 'cs.SD'] | We introduce a sophisticated multi-speaker speech data simulator,
specifically engineered to generate multi-speaker speech recordings. A notable
feature of this simulator is its capacity to modulate the distribution of
silence and overlap via the adjustment of statistical parameters. This
capability offers a tailored t... | 2023-10-18T22:46:20Z | null | CHiME-7 Workshop 2023 | null | Property-Aware Multi-Speaker Data Simulation: A Probabilistic Modelling Technique for Synthetic Data Generation | ['T. Park', 'He Huang', 'Coleman Hooper', 'N. Koluguri', 'Kunal Dhawan', 'Ante Jukic', 'Jagadeesh Balam', 'Boris Ginsburg'] | 2023 | 7th International Workshop on Speech Processing in Everyday Environments (CHiME 2023) | 7 | 24 | ['Engineering', 'Computer Science'] |
2310.12378 | The CHiME-7 Challenge: System Description and Performance of NeMo Team's
DASR System | ['Tae Jin Park', 'He Huang', 'Ante Jukic', 'Kunal Dhawan', 'Krishna C. Puvvada', 'Nithin Koluguri', 'Nikolay Karpov', 'Aleksandr Laptev', 'Jagadeesh Balam', 'Boris Ginsburg'] | ['eess.AS', 'cs.SD'] | We present the NVIDIA NeMo team's multi-channel speech recognition system for
the 7th CHiME Challenge Distant Automatic Speech Recognition (DASR) Task,
focusing on the development of a multi-channel, multi-speaker speech
recognition system tailored to transcribe speech from distributed microphones
and microphone arrays... | 2023-10-18T23:10:46Z | null | CHiME-7 Workshop 2023 | null | null | null | null | null | null | null | null |
2310.12537 | ExtractGPT: Exploring the Potential of Large Language Models for Product
Attribute Value Extraction | ['Alexander Brinkmann', 'Roee Shraga', 'Christian Bizer'] | ['cs.CL'] | E-commerce platforms require structured product data in the form of
attribute-value pairs to offer features such as faceted product search or
attribute-based product comparison. However, vendors often provide unstructured
product descriptions, necessitating the extraction of attribute-value pairs
from these texts. BERT... | 2023-10-19T07:39:00Z | null | null | null | null | null | null | null | null | null | null |
2310.12773 | Safe RLHF: Safe Reinforcement Learning from Human Feedback | ['Josef Dai', 'Xuehai Pan', 'Ruiyang Sun', 'Jiaming Ji', 'Xinbo Xu', 'Mickel Liu', 'Yizhou Wang', 'Yaodong Yang'] | ['cs.AI', 'cs.LG'] | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we pro... | 2023-10-19T14:22:03Z | null | null | null | null | null | null | null | null | null | null |
2310.12823 | AgentTuning: Enabling Generalized Agent Abilities for LLMs | ['Aohan Zeng', 'Mingdao Liu', 'Rui Lu', 'Bowen Wang', 'Xiao Liu', 'Yuxiao Dong', 'Jie Tang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Open large language models (LLMs) with great performance in various tasks
have significantly advanced the development of LLMs. However, they are far
inferior to commercial models such as ChatGPT and GPT-4 when acting as agents
to tackle complex tasks in the real world. These agent tasks employ LLMs as the
central contr... | 2023-10-19T15:19:53Z | 31 pages | null | null | null | null | null | null | null | null | null |
2310.13017 | Position Interpolation Improves ALiBi Extrapolation | ['Faisal Al-Khateeb', 'Nolan Dey', 'Daria Soboleva', 'Joel Hestness'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Linear position interpolation helps pre-trained models using rotary position
embeddings (RoPE) to extrapolate to longer sequence lengths. We propose using
linear position interpolation to extend the extrapolation range of models using
Attention with Linear Biases (ALiBi). We find position interpolation
significantly im... | 2023-10-18T16:41:47Z | 4 pages content, 1 page references, 4 figures | null | null | Position Interpolation Improves ALiBi Extrapolation | ['Faisal Al-Khateeb', 'Nolan Dey', 'Daria Soboleva', 'Joel Hestness'] | 2023 | arXiv.org | 5 | 12 | ['Computer Science'] |
2310.13127 | Auto-Instruct: Automatic Instruction Generation and Ranking for
Black-Box Language Models | ['Zhihan Zhang', 'Shuohang Wang', 'Wenhao Yu', 'Yichong Xu', 'Dan Iter', 'Qingkai Zeng', 'Yang Liu', 'Chenguang Zhu', 'Meng Jiang'] | ['cs.CL'] | Large language models (LLMs) can perform a wide range of tasks by following
natural language instructions, without the necessity of task-specific
fine-tuning. Unfortunately, the performance of LLMs is greatly influenced by
the quality of these instructions, and manually writing effective instructions
for each task is a... | 2023-10-19T19:52:55Z | Accepted to EMNLP 2023 Findings. Work was done before July 2023 | null | null | Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models | ['Zhihan Zhang', 'Shuo Wang', 'W. Yu', 'Yichong Xu', 'Dan Iter', 'Qingkai Zeng', 'Yang Liu', 'Chenguang Zhu', 'Meng Jiang'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 24 | 38 | ['Computer Science'] |
2310.13259 | Domain-specific optimization and diverse evaluation of self-supervised
models for histopathology | ['Jeremy Lai', 'Faruk Ahmed', 'Supriya Vijay', 'Tiam Jaroensri', 'Jessica Loo', 'Saurabh Vyawahare', 'Saloni Agarwal', 'Fayaz Jamil', 'Yossi Matias', 'Greg S. Corrado', 'Dale R. Webster', 'Jonathan Krause', 'Yun Liu', 'Po-Hsuan Cameron Chen', 'Ellery Wulczyn', 'David F. Steiner'] | ['eess.IV', 'cs.CV'] | Task-specific deep learning models in histopathology offer promising
opportunities for improving diagnosis, clinical research, and precision
medicine. However, development of such models is often limited by availability
of high-quality data. Foundation models in histopathology that learn general
representations across ... | 2023-10-20T03:38:07Z | 4 main tables, 3 main figures, additional supplemental tables and
figures | null | null | null | null | null | null | null | null | null |
2310.13289 | SALMONN: Towards Generic Hearing Abilities for Large Language Models | ['Changli Tang', 'Wenyi Yu', 'Guangzhi Sun', 'Xianzhao Chen', 'Tian Tan', 'Wei Li', 'Lu Lu', 'Zejun Ma', 'Chao Zhang'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Hearing is arguably an essential ability of artificial intelligence (AI)
agents in the physical world, which refers to the perception and understanding
of general auditory information consisting of at least three types of sounds:
speech, audio events, and music. In this paper, we propose SALMONN, a speech
audio languag... | 2023-10-20T05:41:57Z | null | null | null | SALMONN: Towards Generic Hearing Abilities for Large Language Models | ['Changli Tang', 'Wenyi Yu', 'Guangzhi Sun', 'Xianzhao Chen', 'Tian Tan', 'Wei Li', 'Lu Lu', 'Zejun Ma', 'Chao Zhang'] | 2,023 | International Conference on Learning Representations | 264 | 59 | ['Computer Science', 'Engineering'] |
2310.13420 | Conversation Chronicles: Towards Diverse Temporal and Relational
Dynamics in Multi-Session Conversations | ['Jihyoung Jang', 'Minseong Boo', 'Hyounghun Kim'] | ['cs.CL'] | In the field of natural language processing, open-domain chatbots have
emerged as an important research topic. However, a major limitation of existing
open-domain chatbot research is its singular focus on short single-session
dialogue, neglecting the potential need for understanding contextual
information in multiple c... | 2023-10-20T11:06:21Z | EMNLP 2023 (23 pages); Project website:
https://conversation-chronicles.github.io | null | null | null | null | null | null | null | null | null |
2310.13639 | Contrastive Preference Learning: Learning from Human Feedback without RL | ['Joey Hejna', 'Rafael Rafailov', 'Harshit Sikchi', 'Chelsea Finn', 'Scott Niekum', 'W. Bradley Knox', 'Dorsa Sadigh'] | ['cs.LG', 'cs.AI'] | Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular
paradigm for aligning models with human intent. Typically RLHF algorithms
operate in two phases: first, use human preferences to learn a reward function
and second, align the model by optimizing the learned reward via reinforcement
learning (RL)... | 2023-10-20T16:37:56Z | ICLR 2024. Code released at https://github.com/jhejna/cpl | null | null | null | null | null | null | null | null | null |
2310.13683 | CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP
Performance on Low-Resource Languages | ['Gabriel Oliveira dos Santos', 'Diego A. B. Moreira', 'Alef Iury Ferreira', 'Jhessica Silva', 'Luiz Pereira', 'Pedro Bueno', 'Thiago Sousa', 'Helena Maia', 'Nádia Da Silva', 'Esther Colombini', 'Helio Pedrini', 'Sandra Avila'] | ['cs.LG'] | This work introduces CAPIVARA, a cost-efficient framework designed to enhance
the performance of multilingual CLIP models in low-resource languages. While
CLIP has excelled in zero-shot vision-language tasks, the resource-intensive
nature of model training remains challenging. Many datasets lack linguistic
diversity, f... | 2023-10-20T17:44:25Z | null | null | null | CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages | ['G. O. D. Santos', 'Diego A. B. Moreira', 'Alef Iury Siqueira Ferreira', 'Jhessica Silva', 'Luiz Pereira', 'Pedro Bueno', 'Thiago Sousa', 'H. Maia', "N'adia da Silva", 'Esther Colombini', 'Helio Pedrini', 'Sandra Avila'] | 2,023 | MRL | 5 | 60 | ['Computer Science'] |
2310.13895 | RTSUM: Relation Triple-based Interpretable Summarization with
Multi-level Salience Visualization | ['Seonglae Cho', 'Yonggi Cho', 'HoonJae Lee', 'Myungha Jang', 'Jinyoung Yeo', 'Dongha Lee'] | ['cs.CL', 'cs.LG'] | In this paper, we present RTSUM, an unsupervised summarization framework that
utilizes relation triples as the basic unit for summarization. Given an input
document, RTSUM first selects salient relation triples via multi-level salience
scoring and then generates a concise summary from the selected relation triples
by u... | 2023-10-21T02:46:03Z | 8 pages, 2 figures | null | 10.18653/v1/2024.naacl-demo.5 | RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization | ['Seonglae Cho', 'Yonggi Cho', 'HoonJae Lee', 'Myungha Jang', 'Jinyoung Yeo', 'Dongha Lee'] | 2,023 | North American Chapter of the Association for Computational Linguistics | 0 | 32 | ['Computer Science'] |
2310.14282 | NERetrieve: Dataset for Next Generation Named Entity Recognition and
Retrieval | ['Uri Katz', 'Matan Vetzler', 'Amir DN Cohen', 'Yoav Goldberg'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Recognizing entities in texts is a central need in many information-seeking
scenarios, and indeed, Named Entity Recognition (NER) is arguably one of the
most successful examples of a widely adopted NLP task and corresponding NLP
technology. Recent advances in large language models (LLMs) appear to provide
effective sol... | 2023-10-22T12:23:00Z | Findings of EMNLP 2023 | null | null | NERetrieve: Dataset for Next Generation Named Entity Recognition and Retrieval | ['Uri Katz', 'Matan Vetzler', 'Amir D. N. Cohen', 'Yoav Goldberg'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 10 | 40 | ['Computer Science'] |
2310.14478 | GeoLM: Empowering Language Models for Geospatially Grounded Language
Understanding | ['Zekun Li', 'Wenxuan Zhou', 'Yao-Yi Chiang', 'Muhao Chen'] | ['cs.CL'] | Humans subconsciously engage in geospatial reasoning when reading articles.
We recognize place names and their spatial relations in text and mentally
associate them with their physical locations on Earth. Although pretrained
language models can mimic this cognitive process using linguistic context, they
do not utilize ... | 2023-10-23T01:20:01Z | Accepted to EMNLP23 main | null | null | null | null | null | null | null | null | null |
2310.14557 | The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64
Languages | ['Chiyu Zhang', 'Khai Duy Doan', 'Qisheng Liao', 'Muhammad Abdul-Mageed'] | ['cs.CL'] | Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate
remarkable performance in a wide range of tasks. Despite numerous recent
studies that examine the performance of instruction-tuned LLMs on various NLP
benchmarks, there remains a lack of comprehensive investigation into their
ability to unders... | 2023-10-23T04:22:44Z | Accepted by EMNLP 2023 Main conference | null | null | null | null | null | null | null | null | null |
2310.14558 | AlpaCare:Instruction-tuned Large Language Models for Medical Application | ['Xinlu Zhang', 'Chenxin Tian', 'Xianjun Yang', 'Lichang Chen', 'Zekun Li', 'Linda Ruth Petzold'] | ['cs.CL', 'cs.AI'] | Instruction-finetuning (IFT) has become crucial in aligning Large Language
Models (LLMs) with diverse human needs and has shown great potential in medical
applications. However, previous studies mainly fine-tune LLMs on biomedical
datasets with limited diversity, which often rely on benchmarks or narrow task
scopes, an... | 2023-10-23T04:22:50Z | null | null | null | null | null | null | null | null | null | null |
2310.14684 | SpEL: Structured Prediction for Entity Linking | ['Hassan S. Shavarani', 'Anoop Sarkar'] | ['cs.CL'] | Entity linking is a prominent thread of research focused on structured data
creation by linking spans of text to an ontology or knowledge source. We
revisit the use of structured prediction for entity linking which classifies
each individual input token as an entity, and aggregates the token predictions.
Our system, ca... | 2023-10-23T08:24:35Z | null | null | null | SpEL: Structured Prediction for Entity Linking | ['Hassan S. Shavarani', 'Anoop Sarkar'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 12 | 66 | ['Computer Science'] |
2310.14757 | SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for
Social Media NLP Research | ['Dimosthenis Antypas', 'Asahi Ushio', 'Francesco Barbieri', 'Leonardo Neves', 'Kiamehr Rezaee', 'Luis Espinosa-Anke', 'Jiaxin Pei', 'Jose Camacho-Collados'] | ['cs.CL'] | Despite its relevance, the maturity of NLP for social media pales in
comparison with general-purpose models, metrics and benchmarks. This fragmented
landscape makes it hard for the community to know, for instance, given a task,
which is the best performing model and how it compares with others. To
alleviate this issue,... | 2023-10-23T09:48:25Z | EMNLP 2023 Findings | null | null | null | null | null | null | null | null | null |
2310.14947 | System Combination via Quality Estimation for Grammatical Error
Correction | ['Muhammad Reza Qorib', 'Hwee Tou Ng'] | ['cs.CL'] | Quality estimation models have been developed to assess the corrections made
by grammatical error correction (GEC) models when the reference or
gold-standard corrections are not available. An ideal quality estimator can be
utilized to combine the outputs of multiple GEC systems by choosing the best
subset of edits from... | 2023-10-23T13:46:49Z | EMNLP 2023 | null | null | System Combination via Quality Estimation for Grammatical Error Correction | ['Muhammad Reza Qorib', 'Hwee Tou Ng'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 5 | 41 | ['Computer Science'] |
2310.15111 | Matryoshka Diffusion Models | ['Jiatao Gu', 'Shuangfei Zhai', 'Yizhe Zhang', 'Josh Susskind', 'Navdeep Jaitly'] | ['cs.CV', 'cs.LG'] | Diffusion models are the de facto approach for generating high-quality images
and videos, but learning high-dimensional models remains a formidable task due
to computational and optimization challenges. Existing methods often resort to
training cascaded models in pixel space or using a downsampled latent space of
a sep... | 2023-10-23T17:20:01Z | Accepted by ICLR2024 | null | null | null | null | null | null | null | null | null |
2310.15200 | Open-Set Image Tagging with Multi-Grained Text Supervision | ['Xinyu Huang', 'Yi-Jie Huang', 'Youcai Zhang', 'Weiwei Tian', 'Rui Feng', 'Yuejie Zhang', 'Yanchun Xie', 'Yaqian Li', 'Lei Zhang'] | ['cs.CV'] | In this paper, we introduce the Recognize Anything Plus Model (RAM++), an
open-set image tagging model effectively leveraging multi-grained text
supervision. Previous approaches (e.g., CLIP) primarily utilize global text
supervision paired with images, leading to sub-optimal performance in
recognizing multiple individu... | 2023-10-23T08:13:33Z | Homepage: https://github.com/xinyu1205/recognize-anything | null | null | Open-Set Image Tagging with Multi-Grained Text Supervision | ['Xinyu Huang', 'Yi-Jie Huang', 'Youcai Zhang', 'Weiwei Tian', 'Rui Feng', 'Yuejie Zhang', 'Yanchun Xie', 'Yaqian Li', 'Lei Zhang'] | 2,023 | null | 35 | 60 | ['Computer Science'] |
2310.15777 | MindLLM: Pre-training Lightweight Large Language Model from Scratch,
Evaluations and Domain Applications | ['Yizhe Yang', 'Huashan Sun', 'Jiawei Li', 'Runheng Liu', 'Yinghao Li', 'Yuhang Liu', 'Heyan Huang', 'Yang Gao'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have demonstrated remarkable performance across
various natural language tasks, marking significant strides towards general
artificial intelligence. While general artificial intelligence is leveraged by
developing increasingly large-scale models, there could be another branch to
develop lig... | 2023-10-24T12:22:34Z | Working in progress | null | null | MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications | ['Yizhe Yang', 'Huashan Sun', 'Jiawei Li', 'Runheng Liu', 'Yinghao Li', 'Yuhang Liu', 'Heyan Huang', 'Yang Gao'] | 2,023 | arXiv.org | 10 | 103 | ['Computer Science'] |
2310.15799 | DALE: Generative Data Augmentation for Low-Resource Legal NLP | ['Sreyan Ghosh', 'Chandra Kiran Evuru', 'Sonal Kumar', 'S Ramaneswaran', 'S Sakshi', 'Utkarsh Tyagi', 'Dinesh Manocha'] | ['cs.CL', 'cs.AI'] | We present DALE, a novel and effective generative Data Augmentation framework
for low-resource LEgal NLP. DALE addresses the challenges existing frameworks
pose in generating effective data augmentations of legal documents - legal
language, with its specialized vocabulary and complex semantics, morphology,
and syntax, ... | 2023-10-24T12:50:28Z | Accepted to EMNLP 2023 Main Conference. Code:
https://github.com/Sreyan88/DALE | null | null | DALE: Generative Data Augmentation for Low-Resource Legal NLP | ['Sreyan Ghosh', 'Chandra Kiran Reddy Evuru', 'Sonal Kumar', 'S Ramaneswaran', 'S. Sakshi', 'Utkarsh Tyagi', 'Dinesh Manocha'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 13 | 0 | ['Computer Science'] |
2310.16049 | MuSR: Testing the Limits of Chain-of-thought with Multistep Soft
Reasoning | ['Zayne Sprague', 'Xi Ye', 'Kaj Bostrom', 'Swarat Chaudhuri', 'Greg Durrett'] | ['cs.CL'] | While large language models (LLMs) equipped with techniques like
chain-of-thought prompting have demonstrated impressive capabilities, they
still fall short in their ability to reason robustly in complex settings.
However, evaluating LLM reasoning is challenging because system capabilities
continue to grow while benchm... | 2023-10-24T17:59:20Z | null | ICLR 2024 (Spotlight) | null | null | null | null | null | null | null | null |