| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.10233 | Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study | ['Di Wu', 'Wasi Uddin Ahmad', 'Kai-Wei Chang'] | ['cs.CL'] | Neural models that do not rely on pre-training have excelled in the keyphrase generation task with large annotated datasets. Meanwhile, new approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, there lacks a systematic study of how the two types of approaches compare and ho... | 2022-12-20T13:20:21Z | Technical Report. The contents are published in two separate papers in EMNLP 2023 (arXiv:2310.06374) and LREC-COLING 2024 (arXiv:2402.14052) | null | null | null | null | null | null | null | null | null |
2212.10315 | HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation | ['Hamish Ivison', 'Akshita Bhagia', 'Yizhong Wang', 'Hannaneh Hajishirzi', 'Matthew Peters'] | ['cs.CL'] | Recent NLP models have shown the remarkable ability to effectively generalise `zero-shot' to new tasks using only natural language instructions as guidance. However, many of these approaches suffer from high computational costs due to their reliance on concatenating lengthy instructions with every input example, result... | 2022-12-20T15:07:37Z | ACL 2023 | null | null | HINT: Hypernetwork Instruction Tuning for Efficient Zero- and Few-Shot Generalisation | ['Hamish Ivison', 'Akshita Bhagia', 'Yizhong Wang', 'Hannaneh Hajishirzi', 'Matthew E. Peters'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 20 | 49 | ['Computer Science'] |
2212.10449 | Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization | ['Artidoro Pagnoni', 'Alexander R. Fabbri', 'Wojciech Kryściński', 'Chien-Sheng Wu'] | ['cs.CL'] | In long document controllable summarization, where labeled data is scarce, pretrained models struggle to adapt to the task and effectively respond to user queries. In this paper, we introduce Socratic pretraining, a question-driven, unsupervised pretraining objective specifically designed to improve controllability in ... | 2022-12-20T17:27:10Z | To appear at ACL 2023 | null | null | Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization | ['Artidoro Pagnoni', 'Alexander R. Fabbri', 'Wojciech Kryscinski', 'Chien-Sheng Wu'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 18 | 62 | ['Computer Science'] |
2212.10465 | SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization | ['Hyunwoo Kim', 'Jack Hessel', 'Liwei Jiang', 'Peter West', 'Ximing Lu', 'Youngjae Yu', 'Pei Zhou', 'Ronan Le Bras', 'Malihe Alikhani', 'Gunhee Kim', 'Maarten Sap', 'Yejin Choi'] | ['cs.CL'] | Data scarcity has been a long standing issue in the field of open-domain social dialogue. To quench this thirst, we present SODA: the first publicly available, million-scale high-quality social dialogue dataset. By contextualizing social commonsense knowledge from a knowledge graph, we are able to distill an exceptiona... | 2022-12-20T17:38:47Z | EMNLP 2023. Dataset, model, and code can be found at https://hyunw.kim/sodaverse | null | null | null | null | null | null | null | null | null |
2212.10505 | DePlot: One-shot visual language reasoning by plot-to-table translation | ['Fangyu Liu', 'Julian Martin Eisenschlos', 'Francesco Piccinno', 'Syrine Krichene', 'Chenxi Pang', 'Kenton Lee', 'Mandar Joshi', 'Wenhu Chen', 'Nigel Collier', 'Yasemin Altun'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-wr... | 2022-12-20T18:20:50Z | ACL 2023 (Findings) | null | null | null | null | null | null | null | null | null |
2212.10511 | When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories | ['Alex Mallen', 'Akari Asai', 'Victor Zhong', 'Rajarshi Das', 'Daniel Khashabi', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of relying solely on their parameters to encode a wealth of world knowledge. This paper aims to understand LMs' strengths and limitations in memorizing fa... | 2022-12-20T18:30:15Z | ACL 2023; Code and data available at https://github.com/AlexTMallen/adaptive-retrieval | null | null | null | null | null | null | null | null | null |
2212.10544 | Pretraining Without Attention | ['Junxiong Wang', 'Jing Nathan Yan', 'Albert Gu', 'Alexander M. Rush'] | ['cs.CL', 'cs.LG'] | Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routi... | 2022-12-20T18:50:08Z | null | null | null | Pretraining Without Attention | ['Junxiong Wang', 'J. Yan', 'Albert Gu', 'Alexander M. Rush'] | 2022 | Conference on Empirical Methods in Natural Language Processing | 49 | 42 | ['Computer Science'] |
2212.10551 | Lego-MT: Learning Detachable Models for Massively Multilingual Machine Translation | ['Fei Yuan', 'Yinquan Lu', 'WenHao Zhu', 'Lingpeng Kong', 'Lei Li', 'Yu Qiao', 'Jingjing Xu'] | ['cs.CL', 'cs.AI'] | Multilingual neural machine translation (MNMT) aims to build a unified model for many language directions. Existing monolithic models for MNMT encounter two challenges: parameter interference among languages and inefficient inference for large models. In this paper, we revisit the classic multi-way structures and devel... | 2022-12-20T18:54:08Z | ACL 2023 Findings | null | null | Lego-MT: Learning Detachable Models for Massively Multilingual Machine Translation | ['Fei Yuan', 'Yinquan Lu', 'Wenhao Zhu', 'Lingpeng Kong', 'Lei Li', 'Jingjing Xu'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 26 | 38 | ['Computer Science'] |
2212.10554 | A Length-Extrapolatable Transformer | ['Yutao Sun', 'Li Dong', 'Barun Patra', 'Shuming Ma', 'Shaohan Huang', 'Alon Benhaim', 'Vishrav Chaudhary', 'Xia Song', 'Furu Wei'] | ['cs.CL'] | Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define attention resolution as an indicator of extrapolation. Then we propose two designs to improve the above metric of Transformers. Specificall... | 2022-12-20T18:56:20Z | 9 pages | null | null | A Length-Extrapolatable Transformer | ['Yutao Sun', 'Li Dong', 'Barun Patra', 'Shuming Ma', 'Shaohan Huang', 'A. Benhaim', 'Vishrav Chaudhary', 'Xia Song', 'Furu Wei'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 124 | 42 | ['Computer Science'] |
2212.10560 | Self-Instruct: Aligning Language Models with Self-Generated Instructions | ['Yizhong Wang', 'Yeganeh Kordi', 'Swaroop Mishra', 'Alisa Liu', 'Noah A. Smith', 'Daniel Khashabi', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.AI'] | Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the ge... | 2022-12-20T18:59:19Z | ACL 2023 camera ready, 23 pages, 9 figures, 11 tables | null | null | Self-Instruct: Aligning Language Models with Self-Generated Instructions | ['Yizhong Wang', 'Yeganeh Kordi', 'Swaroop Mishra', 'Alisa Liu', 'Noah A. Smith', 'Daniel Khashabi', 'Hannaneh Hajishirzi'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 2269 | 66 | ['Computer Science'] |
2212.10726 | Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval | ['John Wieting', 'Jonathan H. Clark', 'William W. Cohen', 'Graham Neubig', 'Taylor Berg-Kirkpatrick'] | ['cs.CL', 'cs.LG'] | Contrastive learning has been successfully used for retrieval of semantically aligned sentences, but it often requires large batch sizes or careful engineering to work well. In this paper, we instead propose a generative model for learning multilingual text embeddings which can be used to retrieve or score sentence pai... | 2022-12-21T02:41:40Z | Published as a long paper at ACL 2023 | null | null | Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval | ['J. Wieting', 'J. Clark', 'William W. Cohen', 'Graham Neubig', 'Taylor Berg-Kirkpatrick'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 6 | 51 | ['Computer Science'] |
2212.10758 | ORCA: A Challenging Benchmark for Arabic Language Understanding | ['AbdelRahim Elmadany', 'El Moatez Billah Nagoudi', 'Muhammad Abdul-Mageed'] | ['cs.CL', 'cs.AI'] | Due to their crucial role in all NLP, several benchmarks have been proposed to evaluate pretrained language models. In spite of these efforts, no public benchmark of diverse nature currently exists for evaluation of Arabic. This makes it challenging to measure progress for both Arabic and multilingual language models. ... | 2022-12-21T04:35:43Z | All authors contributed equally. Accepted at ACL 2023, Toronto, Canada | null | null | ORCA: A Challenging Benchmark for Arabic Language Understanding | ['AbdelRahim Elmadany', 'El Moatez Billah Nagoudi', 'M. Abdul-Mageed'] | 2022 | Annual Meeting of the Association for Computational Linguistics | 46 | 127 | ['Computer Science'] |
2212.10785 | SERENGETI: Massively Multilingual Language Models for Africa | ['Ife Adebara', 'AbdelRahim Elmadany', 'Muhammad Abdul-Mageed', 'Alcides Alcoba Inciarte'] | ['cs.CL', 'cs.AI'] | Multilingual pretrained language models (mPLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. To date, only ~31 out of ~2,000 African languages are covered in existing language models. We ameliorate this limitation by develo... | 2022-12-21T05:54:14Z | To appear in Findings of ACL 2023 | null | null | null | null | null | null | null | null | null |
2212.11140 | Benchmarking Large Language Models for Automated Verilog RTL Code Generation | ['Shailja Thakur', 'Baleegh Ahmad', 'Zhenxing Fan', 'Hammond Pearce', 'Benjamin Tan', 'Ramesh Karri', 'Brendan Dolan-Gavitt', 'Siddharth Garg'] | ['cs.PL', 'cs.LG', 'cs.SE'] | Automating hardware design could obviate a significant amount of human error from the engineering process and lead to fewer errors. Verilog is a popular hardware description language to model and design digital systems, thus generating Verilog code is a critical first step. Emerging large language models (LLMs) are abl... | 2022-12-13T16:34:39Z | Accepted in DATE 2023. 7 pages, 4 tables, 7 figures | null | null | Benchmarking Large Language Models for Automated Verilog RTL Code Generation | ['Shailja Thakur', 'Baleegh Ahmad', 'Zhenxing Fan', 'H. Pearce', 'Benjamin Tan', 'R. Karri', 'Brendan Dolan-Gavitt', 'S. Garg'] | 2022 | Design, Automation and Test in Europe | 141 | 15 | ['Computer Science'] |
2212.11565 | Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation | ['Jay Zhangjie Wu', 'Yixiao Ge', 'Xintao Wang', 'Weixian Lei', 'Yuchao Gu', 'Yufei Shi', 'Wynne Hsu', 'Ying Shan', 'Xiaohu Qie', 'Mike Zheng Shou'] | ['cs.CV'] | To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T2V) generator. Despite their promising results, such paradigm is computationally expensive. In this work, we propose a new T2V generation setting$\unicode{x2014}$One-Shot Video Tuning, w... | 2022-12-22T09:43:36Z | Preprint | null | null | null | null | null | null | null | null | null |
2212.11613 | DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders | ['Xiaoyang Kang', 'Tao Yang', 'Wenqi Ouyang', 'Peiran Ren', 'Lingzhi Li', 'Xuansong Xie'] | ['cs.CV'] | Image colorization is a challenging problem due to multi-modal uncertainty and high ill-posedness. Directly training a deep neural network usually leads to incorrect semantic colors and low color richness. While transformer-based methods can deliver better results, they often rely on manually designed priors, suffer fr... | 2022-12-22T11:17:57Z | ICCV 2023; Code: https://github.com/piddnad/DDColor | null | null | null | null | null | null | null | null | null |
2212.11696 | Reversible Column Networks | ['Yuxuan Cai', 'Yizhuang Zhou', 'Qi Han', 'Jianjian Sun', 'Xiangwen Kong', 'Jun Li', 'Xiangyu Zhang'] | ['cs.CV'] | We propose a new neural network design paradigm Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of subnetworks, named columns respectively, between which multi-level reversible connections are employed. Such architectural scheme attributes RevCol very different behavior from c... | 2022-12-22T13:37:59Z | Accepted by ICLR 2023 | null | null | Reversible Column Networks | ['Yuxuan Cai', 'Yizhuang Zhou', 'Qi Han', 'Jianjian Sun', 'Xiangwen Kong', 'Jun Yu Li', 'Xiangyu Zhang'] | 2022 | International Conference on Learning Representations | 59 | 84 | ['Computer Science'] |
2212.12017 | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization | ['Srinivasan Iyer', 'Xi Victoria Lin', 'Ramakanth Pasunuru', 'Todor Mihaylov', 'Daniel Simig', 'Ping Yu', 'Kurt Shuster', 'Tianlu Wang', 'Qing Liu', 'Punit Singh Koura', 'Xian Li', "Brian O'Horo", 'Gabriel Pereyra', 'Jeff Wang', 'Christopher Dewan', 'Asli Celikyilmaz', 'Luke Zettlemoyer', 'Ves Stoyanov'] | ['cs.CL'] | Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made du... | 2022-12-22T19:56:09Z | 56 pages. v2->v3: fix OPT-30B evaluation results across benchmarks (previously we reported lower performance of this model due to an evaluation pipeline bug) | null | null | null | null | null | null | null | null | null |
2212.12266 | Large Raw Emotional Dataset with Aggregation Mechanism | ['Vladimir Kondratenko', 'Artem Sokolov', 'Nikolay Karpov', 'Oleg Kutuzov', 'Nikita Savushkin', 'Fyodor Minkin'] | ['eess.AS', '62-07', 'I.2.7'] | We present a new data set for speech emotion recognition (SER) tasks called Dusha. The corpus contains approximately 350 hours of data, more than 300 000 audio recordings with Russian speech and their transcripts. Therefore it is the biggest open bi-modal data collection for SER task nowadays. It is annotated using a c... | 2022-12-23T11:31:02Z | 6 pages, 1 figures, submitted to ICASSP 2023 | null | null | null | null | null | null | null | null | null |
2212.12794 | GraphCast: Learning skillful medium-range global weather forecasting | ['Remi Lam', 'Alvaro Sanchez-Gonzalez', 'Matthew Willson', 'Peter Wirnsberger', 'Meire Fortunato', 'Ferran Alet', 'Suman Ravuri', 'Timo Ewalds', 'Zach Eaton-Rosen', 'Weihua Hu', 'Alexander Merose', 'Stephan Hoyer', 'George Holland', 'Oriol Vinyals', 'Jacklynn Stott', 'Alexander Pritzel', 'Shakir Mohamed', 'Peter Battag... | ['cs.LG', 'physics.ao-ph'] | Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy, but cannot directly use historical weather data to improve the underlying model. We introduce a machine ... | 2022-12-24T18:15:39Z | GraphCast code and trained weights are available at: https://github.com/deepmind/graphcast | null | null | null | null | null | null | null | null | null |
2212.13138 | Large Language Models Encode Clinical Knowledge | ['Karan Singhal', 'Shekoofeh Azizi', 'Tao Tu', 'S. Sara Mahdavi', 'Jason Wei', 'Hyung Won Chung', 'Nathan Scales', 'Ajay Tanwani', 'Heather Cole-Lewis', 'Stephen Pfohl', 'Perry Payne', 'Martin Seneviratne', 'Paul Gamble', 'Chris Kelly', 'Nathaneal Scharli', 'Aakanksha Chowdhery', 'Philip Mansfield', 'Blaise Aguera y Ar... | ['cs.CL'] | Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no stan... | 2022-12-26T14:28:24Z | null | null | null | Large language models encode clinical knowledge | ['K. Singhal', 'Shekoofeh Azizi', 'T. Tu', 'S. Mahdavi', 'Jason Wei', 'Hyung Won Chung', 'Nathan Scales', 'A. Tanwani', 'H. Cole-Lewis', 'S. Pfohl', 'P. Payne', 'Martin G. Seneviratne', 'P. Gamble', 'C. Kelly', 'Nathaneal Scharli', 'Aakanksha Chowdhery', 'P. A. Mansfield', 'B. A. Y. Arcas', 'D. Webster', 'Greg S. Corra... | 2022 | Nature | 2421 | 113 | ['Computer Science', 'Medicine'] |
2212.14034 | Cramming: Training a Language Model on a Single GPU in One Day | ['Jonas Geiping', 'Tom Goldstein'] | ['cs.CL', 'cs.LG'] | Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite ... | 2022-12-28T18:59:28Z | 22 pages, we provide code at https://github.com/JonasGeiping/cramming | null | null | Cramming: Training a Language Model on a Single GPU in One Day | ['Jonas Geiping', 'T. Goldstein'] | 2022 | International Conference on Machine Learning | 91 | 146 | ['Computer Science'] |
2212.14052 | Hungry Hungry Hippos: Towards Language Modeling with State Space Models | ['Daniel Y. Fu', 'Tri Dao', 'Khaled K. Saab', 'Armin W. Thomas', 'Atri Rudra', 'Christopher Ré'] | ['cs.LG', 'cs.CL'] | State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization.... | 2022-12-28T17:56:03Z | ICLR 2023 Camera-Ready (Notable-top-25% / Spotlight) | null | null | Hungry Hungry Hippos: Towards Language Modeling with State Space Models | ['Tri Dao', 'Daniel Y. Fu', 'Khaled Kamal Saab', 'A. Thomas', 'A. Rudra', 'Christopher Ré'] | 2022 | International Conference on Learning Representations | 406 | 65 | ['Computer Science'] |
2212.14532 | Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning | ['Colorado J. Reed', 'Ritwik Gupta', 'Shufan Li', 'Sarah Brockman', 'Christopher Funk', 'Brian Clipp', 'Kurt Keutzer', 'Salvatore Candido', 'Matt Uyttendaele', 'Trevor Darrell'] | ['cs.CV'] | Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data for scale-dependent domains, such as ... | 2022-12-30T03:15:34Z | International Conference on Computer Vision 2023 | null | null | Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning | ['Colorado Reed', 'Ritwik Gupta', 'Shufan Li', 'S. Brockman', 'Christopher Funk', 'Brian Clipp', 'Salvatore Candido', 'M. Uyttendaele', 'Trevor Darrell'] | 2022 | IEEE International Conference on Computer Vision | 193 | 68 | ['Computer Science'] |
2301.00234 | A Survey on In-context Learning | ['Qingxiu Dong', 'Lei Li', 'Damai Dai', 'Ce Zheng', 'Jingyuan Ma', 'Rui Li', 'Heming Xia', 'Jingjing Xu', 'Zhiyong Wu', 'Tianyu Liu', 'Baobao Chang', 'Xu Sun', 'Lei Li', 'Zhifang Sui'] | ['cs.CL', 'cs.AI'] | With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP), where LLMs make predictions based on contexts augmented with a few examples. It has been a significant trend to explore ICL to evaluate and extrapolate the abi... | 2022-12-31T15:57:09Z | Update | null | null | null | null | null | null | null | null | null |
2301.00704 | Muse: Text-To-Image Generation via Masked Generative Transformers | ['Huiwen Chang', 'Han Zhang', 'Jarred Barber', 'AJ Maschinot', 'Jose Lezama', 'Lu Jiang', 'Ming-Hsuan Yang', 'Kevin Murphy', 'William T. Freeman', 'Michael Rubinstein', 'Yuanzhen Li', 'Dilip Krishnan'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large ... | 2023-01-02T14:43:38Z | null | null | null | Muse: Text-To-Image Generation via Masked Generative Transformers | ['Huiwen Chang', 'Han Zhang', 'Jarred Barber', 'AJ Maschinot', 'José Lezama', 'Lu Jiang', 'Ming Yang', 'K. Murphy', 'W. Freeman', 'Michael Rubinstein', 'Yuanzhen Li', 'Dilip Krishnan'] | 2023 | International Conference on Machine Learning | 560 | 87 | ['Computer Science'] |
2301.00769 | Sharp norm estimates for the classical heat equation | ['Erik Talvila'] | ['math.AP', '35K05, 46E30 (Primary) 26A42 (Secondary)'] | Sharp estimates of solutions of the classical heat equation are proved in $L^p$ norms on the real line. | 2023-01-02T17:40:53Z | null | null | null | null | null | null | null | null | null | null |
2301.00774 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | ['Elias Frantar', 'Dan Alistarh'] | ['cs.LG'] | We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately ... | 2023-01-02T17:48:56Z | null | null | null | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | ['Elias Frantar', 'Dan Alistarh'] | 2023 | International Conference on Machine Learning | 739 | 56 | ['Computer Science'] |
2301.00808 | ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders | ['Sanghyun Woo', 'Shoubhik Debnath', 'Ronghang Hu', 'Xinlei Chen', 'Zhuang Liu', 'In So Kweon', 'Saining Xie'] | ['cs.CV'] | Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models w... | 2023-01-02T18:59:31Z | Code and models available at https://github.com/facebookresearch/ConvNeXt-V2 | null | null | null | null | null | null | null | null | null |
2301.00876 | MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding | ['Steven H. Wang', 'Antoine Scardigli', 'Leonard Tang', 'Wei Chen', 'Dimitry Levkin', 'Anya Chen', 'Spencer Ball', 'Thomas Woodside', 'Oliver Zhang', 'Dan Hendrycks'] | ['cs.CL'] | Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on ... | 2023-01-02T21:08:27Z | EMNLP 2023. 5 pages + appendix. Code and dataset are available at https://github.com/TheAtticusProject/maud | null | null | MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding | ['Steven H. Wang', 'Antoine Scardigli', 'Leonard Tang', 'Wei Chen', 'D.M. Levkin', 'Anya Chen', 'Spencer Ball', 'Thomas Woodside', 'Oliver Zhang', 'Dan Hendrycks'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 22 | 28 | ['Computer Science'] |
2301.01081 | StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles | ['Yifeng Ma', 'Suzhen Wang', 'Zhipeng Hu', 'Changjie Fan', 'Tangjie Lv', 'Yu Ding', 'Zhidong Deng', 'Xin Yu'] | ['cs.CV'] | Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, ... | 2023-01-03T13:16:24Z | Accepted at AAAI2023 as Oral. Demo: https://youtu.be/mO2Tjcwr4u8 | null | null | StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles | ['Yifeng Ma', 'Suzhe Wang', 'Zhipeng Hu', 'Changjie Fan', 'Tangjie Lv', 'Yu Ding', 'Zhidong Deng', 'Xin Yu'] | 2023 | AAAI Conference on Artificial Intelligence | 89 | 60 | ['Computer Science'] |
2301.01701 | Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries | ['Ali Al-Kaswan', 'Toufique Ahmed', 'Maliheh Izadi', 'Anand Ashok Sawant', 'Premkumar Devanbu', 'Arie van Deursen'] | ['cs.CR', 'cs.AI', 'cs.LG', 'cs.SE'] | Reverse engineering binaries is required to understand and analyse programs for which the source code is unavailable. Decompilers can transform the largely unreadable binaries into a more readable source code-like representation. However, reverse engineering is time-consuming, much of which is taken up by labelling the... | 2023-01-04T16:56:33Z | SANER 2023 Technical Track Camera Ready | null | null | Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binarie | ['Ali Al-Kaswan', 'Toufique Ahmed', 'M. Izadi', 'A. Sawant', 'Prem Devanbu', 'A. Deursen'] | 2023 | IEEE International Conference on Software Analysis, Evolution, and Reengineering | 37 | 47 | ['Computer Science'] |
2301.01820 | InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval | ['Vitor Jeronymo', 'Luiz Bonifacio', 'Hugo Abonizio', 'Marzieh Fadaee', 'Roberto Lotufo', 'Jakub Zavrel', 'Rodrigo Nogueira'] | ['cs.IR', 'cs.AI'] | Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Prompt... | 2023-01-04T20:58:43Z | null | null | null | InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval | ['Vitor Jeronymo', 'L. Bonifacio', 'H. Abonizio', 'Marzieh Fadaee', 'R. Lotufo', 'Jakub Zavrel', 'Rodrigo Nogueira'] | 2023 | arXiv.org | 96 | 10 | ['Computer Science'] |
2301.02111 | Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers | ['Chengyi Wang', 'Sanyuan Chen', 'Yu Wu', 'Ziqiang Zhang', 'Long Zhou', 'Shujie Liu', 'Zhuo Chen', 'Yanqing Liu', 'Huaming Wang', 'Jinyu Li', 'Lei He', 'Sheng Zhao', 'Furu Wei'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called Vall-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression ... | 2023-01-05T15:37:15Z | Working in progress | null | null | null | null | null | null | null | null | null |
2301.02228 | MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology | ['Chaoyi Wu', 'Xiaoman Zhang', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['eess.IV', 'cs.CL', 'cs.CV'] | In this paper, we consider enhancing medical visual-language pre-training (VLP) with domain-specific knowledge, by exploiting the paired image-text reports from the radiological daily practice. In particular, we make the following contributions: First, unlike existing works that directly process the raw reports, we ado... | 2023-01-05T18:55:09Z | null | null | null | MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training for X-ray Diagnosis | ['Chaoyi Wu', 'Xiaoman Zhang', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | 2023 | IEEE International Conference on Computer Vision | 121 | 74 | ['Engineering', 'Computer Science', 'Medicine'] |
2301.02884 | TunesFormer: Forming Irish Tunes with Control Codes by Bar Patching | ['Shangda Wu', 'Xiaobing Li', 'Feng Yu', 'Maosong Sun'] | ['cs.SD', 'eess.AS'] | This paper introduces TunesFormer, an efficient Transformer-based dual-decoder model specifically designed for the generation of melodies that adhere to user-defined musical forms. Trained on 214,122 Irish tunes, TunesFormer utilizes techniques including bar patching and control codes. Bar patching reduces sequence len... | 2023-01-07T16:11:55Z | 6 pages, 1 figure, 1 table, accepted by HCMIR 2023 | null | null | null | null | null | null | null | null | null |
2301.03110 | RobArch: Designing Robust Architectures against Adversarial Attacks | ['ShengYun Peng', 'Weilin Xu', 'Cory Cornelius', 'Kevin Li', 'Rahul Duggal', 'Duen Horng Chau', 'Jason Martin'] | ['cs.CV', 'cs.AI'] | Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs). However, compared to the large body of research in optimizing the adversarial training process, there are few investigations into how architecture components affect robustness, and they rarely constrain mode... | 2023-01-08T21:19:52Z | null | null | null | null | null | null | null | null | null | null |
2301.03136 | Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance | ['Guijin Son', 'Hanwool Lee', 'Nahyeon Kang', 'Moonjeong Hahm'] | ['cs.CL', 'cs.LG', 'q-fin.GN'] | Extraction of sentiment signals from news text, stock message boards, and business reports, for stock movement prediction, has been a rising field of interest in finance. Building upon past literature, the most recent works attempt to better capture sentiment from sentences with complex syntactic structures by introduc... | 2023-01-09T01:26:55Z | Published at The AAAI-2023 Workshop On Multimodal AI For Financial Forecasting (muffin@AAAI2023) | null | null | null | null | null | null | null | null | null |
2301.03150 | MOTOR: A Time-To-Event Foundation Model For Structured Medical Records | ['Ethan Steinberg', 'Jason Fries', 'Yizhe Xu', 'Nigam Shah'] | ['cs.LG'] | We present a self-supervised, time-to-event (TTE) foundation model called MOTOR (Many Outcome Time Oriented Representations) which is pretrained on timestamped sequences of events in electronic health records (EHR) and health insurance claims. TTE models are used for estimating the probability distribution of the time ... | 2023-01-09T02:42:39Z | null | null | null | null | null | null | null | null | null | null |
2,301.03319 | FullStop: Punctuation and Segmentation Prediction for Dutch with
Transformers | ['Vincent Vandeghinste', 'Oliver Guhr'] | ['cs.CL', 'cs.AI', 'I.2.7'] | When applying automated speech recognition (ASR) for Belgian Dutch (Van Dyck
et al. 2021), the output consists of an unsegmented stream of words, without
any punctuation. A next step is to perform segmentation and insert punctuation,
making the ASR output more readable and easy to manually correct. As far as we
know th... | 2023-01-09T13:12:05Z | 18 pages | null | null | FullStop: Punctuation and Segmentation Prediction for Dutch with Transformers | ['Vincent Vandeghinste', 'Oliver Guhr'] | 2,023 | Language Resources and Evaluation | 6 | 36 | ['Computer Science'] |
2,301.03403 | A comprehensive review of automatic text summarization techniques:
method, data, evaluation and coding | ['Daniel O. Cajueiro', 'Arthur G. Nery', 'Igor Tavares', 'Maísa K. De Melo', 'Silvia A. dos Reis', 'Li Weigang', 'Victor R. R. Celestino'] | ['cs.CL', 'cs.LG'] | We provide a literature review about Automatic Text Summarization (ATS)
systems. We consider a citation-based approach. We start with some popular and
well-known papers that we have in hand about each topic we want to cover and we
have tracked the "backward citations" (papers that are cited by the set of
papers we knew... | 2023-01-04T19:20:18Z | null | null | null | null | null | null | null | null | null | null |
2,301.03988 | SantaCoder: don't reach for the stars! | ['Loubna Ben Allal', 'Raymond Li', 'Denis Kocetkov', 'Chenghao Mou', 'Christopher Akiki', 'Carlos Munoz Ferrandis', 'Niklas Muennighoff', 'Mayank Mishra', 'Alex Gu', 'Manan Dey', 'Logesh Kumar Umapathi', 'Carolyn Jane Anderson', 'Yangtian Zi', 'Joel Lamy Poirier', 'Hailey Schoelkopf', 'Sergey Troshin', 'Dmitry Abulkhan... | ['cs.SE', 'cs.AI', 'cs.LG'] | The BigCode project is an open-scientific collaboration working on the
responsible development of large language models for code. This tech report
describes the progress of the collaboration until December 2022, outlining the
current state of the Personally Identifiable Information (PII) redaction
pipeline, the experim... | 2023-01-09T10:52:35Z | null | null | null | SantaCoder: don't reach for the stars! | ['Loubna Ben Allal', 'Raymond Li', 'Denis Kocetkov', 'Chenghao Mou', 'Christopher Akiki', 'Carlos Muñoz Ferrandis', 'Niklas Muennighoff', 'Mayank Mishra', 'A. Gu', 'Manan Dey', 'Logesh Kumar Umapathi', 'Carolyn Jane Anderson', 'Yangtian Zi', 'J. Poirier', 'Hailey Schoelkopf', 'S. Troshin', 'Dmitry Abulkhanov', 'M. Rome... | 2,023 | arXiv.org | 200 | 47 | ['Computer Science'] |
2,301.04558 | Learning to Exploit Temporal Structure for Biomedical Vision-Language
Processing | ['Shruthi Bannur', 'Stephanie Hyland', 'Qianchu Liu', 'Fernando Pérez-García', 'Maximilian Ilse', 'Daniel C. Castro', 'Benedikt Boecking', 'Harshita Sharma', 'Kenza Bouzid', 'Anja Thieme', 'Anton Schwaighofer', 'Maria Wetscherek', 'Matthew P. Lungren', 'Aditya Nori', 'Javier Alvarez-Valle', 'Ozan Oktay'] | ['cs.CV', 'cs.CL'] | Self-supervised learning in vision-language processing exploits semantic
alignment between imaging and text modalities. Prior work in biomedical VLP has
mostly relied on the alignment of single image and report pairs even though
clinical notes commonly refer to prior images. This does not only introduce
poor alignment ... | 2023-01-11T16:35:33Z | To appear in CVPR 2023 | null | null | null | null | null | null | null | null | null |
2,301.04883 | SlideVQA: A Dataset for Document Visual Question Answering on Multiple
Images | ['Ryota Tanaka', 'Kyosuke Nishida', 'Kosuke Nishida', 'Taku Hasegawa', 'Itsumi Saito', 'Kuniko Saito'] | ['cs.CL', 'cs.CV'] | Visual question answering on document images that contain textual, visual,
and layout information, called document VQA, has received much attention
recently. Although many datasets have been proposed for developing document VQA
systems, most of the existing datasets focus on understanding the content
relationships with... | 2023-01-12T09:00:42Z | Accepted by AAAI2023 | null | null | null | null | null | null | null | null | null |
2,301.05225 | Domain Expansion of Image Generators | ['Yotam Nitzan', 'Michaël Gharbi', 'Richard Zhang', 'Taesung Park', 'Jun-Yan Zhu', 'Daniel Cohen-Or', 'Eli Shechtman'] | ['cs.CV', 'cs.GR', 'cs.LG'] | Can one inject new concepts into an already trained generative model, while
respecting its existing structure and knowledge? We propose a new task - domain
expansion - to address this. Given a pretrained generator and novel (but
related) domains, we expand the generator to jointly model all domains, old and
new, harmon... | 2023-01-12T18:59:47Z | Project Page and code are available at
https://yotamnitzan.github.io/domain-expansion/. CVPR 2023 Camera-Ready | null | null | null | null | null | null | null | null | null |
2,301.05586 | YOLOv6 v3.0: A Full-Scale Reloading | ['Chuyi Li', 'Lulu Li', 'Yifei Geng', 'Hongliang Jiang', 'Meng Cheng', 'Bo Zhang', 'Zaidan Ke', 'Xiaoming Xu', 'Xiangxiang Chu'] | ['cs.CV'] | The YOLO community has been in high spirits since our first two releases! By
the advent of Chinese New Year 2023, which sees the Year of the Rabbit, we
refurnish YOLOv6 with numerous novel enhancements on the network architecture
and the training scheme. This release is identified as YOLOv6 v3.0. For a
glimpse of perfo... | 2023-01-13T14:46:46Z | Tech Report. arXiv admin note: text overlap with arXiv:2209.02976 | null | null | null | null | null | null | null | null | null |
2,301.05948 | tasksource: A Dataset Harmonization Framework for Streamlined NLP
Multi-Task Learning and Evaluation | ['Damien Sileo'] | ['cs.CL', 'cs.AI', 'I.2.7'] | The HuggingFace Datasets Hub hosts thousands of datasets, offering exciting
opportunities for language model training and evaluation. However, datasets for
a specific task type often have different schemas, making harmonization
challenging. Multi-task training or evaluation necessitates manual work to fit
data into tas... | 2023-01-14T16:38:04Z | null | null | null | tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation | ['Damien Sileo'] | 2,023 | null | 11 | 29 | ['Computer Science'] |
2,301.06051 | DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets | ['Haiyang Wang', 'Chen Shi', 'Shaoshuai Shi', 'Meng Lei', 'Sen Wang', 'Di He', 'Bernt Schiele', 'Liwei Wang'] | ['cs.CV'] | Designing an efficient yet deployment-friendly 3D backbone to handle sparse
point clouds is a fundamental problem in 3D perception. Compared with the
customized sparse convolution, the attention mechanism in Transformers is more
appropriate for flexibly modeling long-range relationships and is easier to be
deployed in ... | 2023-01-15T09:31:58Z | Accepted by CVPR2023 | null | null | DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets | ['Haiyang Wang', 'Chen Shi', 'Shaoshuai Shi', 'Meng Lei', 'Sen Wang', 'Di He', 'B. Schiele', 'Liwei Wang'] | 2,023 | Computer Vision and Pattern Recognition | 122 | 61 | ['Computer Science'] |
2,301.06052 | T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete
Representations | ['Jianrong Zhang', 'Yangsong Zhang', 'Xiaodong Cun', 'Shaoli Huang', 'Yong Zhang', 'Hongwei Zhao', 'Hongtao Lu', 'Xi Shen'] | ['cs.CV'] | In this work, we investigate a simple and must-known conditional generative
framework based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and
Generative Pre-trained Transformer (GPT) for human motion generation from
textural descriptions. We show that a simple CNN-based VQ-VAE with commonly
used training recipes... | 2023-01-15T09:34:42Z | Accepted to CVPR 2023. Project page:
https://mael-zys.github.io/T2M-GPT/ | null | null | null | null | null | null | null | null | null |
2,301.06323 | An Error-Guided Correction Model for Chinese Spelling Error Correction | ['Rui Sun', 'Xiuyu Wu', 'Yunfang Wu'] | ['cs.CL', 'cs.AI'] | Although existing neural network approaches have achieved great success on
Chinese spelling correction, there is still room to improve. The model is
required to avoid over-correction and to distinguish a correct token from its
phonological and visually similar ones. In this paper, we propose an
error-guided correction ... | 2023-01-16T09:27:45Z | null | null | null | An Error-Guided Correction Model for Chinese Spelling Error Correction | ['Ruiyong Sun', 'Xiuyu Wu', 'Yunfang Wu'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 10 | 33 | ['Computer Science'] |
2,301.06568 | Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling | ['Ahmed Elnaggar', 'Hazem Essam', 'Wafaa Salah-Eldin', 'Walid Moustafa', 'Mohamed Elkerdawy', 'Charlotte Rochereau', 'Burkhard Rost'] | ['cs.LG', 'cs.CL', 'cs.DC', 'q-bio.QM'] | As opposed to scaling-up protein language models (PLMs), we seek to improve
performance via protein-specific optimization. Although the proportionality
between the language model size and the richness of its learned representations
is validated, we prioritize accessibility and pursue a path of data-efficient,
cost-reduc... | 2023-01-16T19:04:45Z | 29 pages, 6 figures | null | null | null | null | null | null | null | null | null |
2,301.07093 | GLIGEN: Open-Set Grounded Text-to-Image Generation | ['Yuheng Li', 'Haotian Liu', 'Qingyang Wu', 'Fangzhou Mu', 'Jianwei Yang', 'Jianfeng Gao', 'Chunyuan Li', 'Yong Jae Lee'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.GR', 'cs.LG'] | Large-scale text-to-image diffusion models have made amazing advances.
However, the status quo is to use text input alone, which can impede
controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image
Generation, a novel approach that builds upon and extends the functionality of
existing pre-trained tex... | 2023-01-17T18:58:58Z | null | null | null | null | null | null | null | null | null | null |
2,301.07295 | Adapting Multilingual Speech Representation Model for a New,
Underresourced Language through Multilingual Fine-tuning and Continued
Pretraining | ['Karol Nowakowski', 'Michal Ptaszynski', 'Kyoko Murasaki', 'Jagna Nieuważny'] | ['cs.CL', 'cs.LG', 'eess.AS'] | In recent years, neural models learned through self-supervised pretraining on
large scale multilingual text or speech data have exhibited promising results
for underresourced languages, especially when a relatively large amount of data
from related language(s) is available. While the technology has a potential for
faci... | 2023-01-18T03:57:53Z | 14 pages | Information Processing & Management, Volume 60, Issue 2, March
2023, 103148, ISSN 0306-4573 | 10.1016/j.ipm.2022.103148 | null | null | null | null | null | null | null |
2,301.07507 | Graphix-T5: Mixing Pre-Trained Transformers with Graph-Aware Layers for
Text-to-SQL Parsing | ['Jinyang Li', 'Binyuan Hui', 'Reynold Cheng', 'Bowen Qin', 'Chenhao Ma', 'Nan Huo', 'Fei Huang', 'Wenyu Du', 'Luo Si', 'Yongbin Li'] | ['cs.CL', 'cs.DB'] | The task of text-to-SQL parsing, which aims at converting natural language
questions into executable SQL queries, has garnered increasing attention in
recent years, as it can assist end users in efficiently extracting vital
information from databases without the need for technical background. One of
the major challenge... | 2023-01-18T13:29:05Z | Accepted to AAAI 2023 main conference (oral) | null | null | Graphix-T5: Mixing Pre-Trained Transformers with Graph-Aware Layers for Text-to-SQL Parsing | ['Jinyang Li', 'Binyuan Hui', 'Reynold Cheng', 'Bowen Qin', 'Chenhao Ma', 'Nan Huo', 'Fei Huang', 'Wenyu Du', 'Luo Si', 'Yongbin Li'] | 2,023 | AAAI Conference on Artificial Intelligence | 115 | 50 | ['Computer Science'] |
2,301.07597 | How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation,
and Detection | ['Biyang Guo', 'Xin Zhang', 'Ziyuan Wang', 'Minqi Jiang', 'Jinran Nie', 'Yuxuan Ding', 'Jianwei Yue', 'Yupeng Wu'] | ['cs.CL'] | The introduction of ChatGPT has garnered widespread attention in both
academic and industrial communities. ChatGPT is able to respond effectively to
a wide range of human questions, providing fluent and comprehensive answers
that significantly surpass previous public chatbots in terms of security and
usefulness. On one... | 2023-01-18T15:23:25Z | https://github.com/Hello-SimpleAI/chatgpt-comparison-detection | null | null | How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection | ['Biyang Guo', 'Xin Zhang', 'Ziyuan Wang', 'Minqi Jiang', 'Jinran Nie', 'Yuxuan Ding', 'Jianwei Yue', 'Yupeng Wu'] | 2,023 | arXiv.org | 622 | 48 | ['Computer Science'] |
2,301.08237 | LoCoNet: Long-Short Context Network for Active Speaker Detection | ['Xizi Wang', 'Feng Cheng', 'Gedas Bertasius', 'David Crandall'] | ['cs.CV'] | Active Speaker Detection (ASD) aims to identify who is speaking in each frame
of a video. ASD reasons from audio and visual information from two contexts:
long-term intra-speaker context and short-term inter-speaker context. Long-term
intra-speaker context models the temporal dependencies of the same speaker,
while sho... | 2023-01-19T18:54:43Z | accepted by CVPR 2024 | null | null | null | null | null | null | null | null | null |
2,301.08243 | Self-Supervised Learning from Images with a Joint-Embedding Predictive
Architecture | ['Mahmoud Assran', 'Quentin Duval', 'Ishan Misra', 'Piotr Bojanowski', 'Pascal Vincent', 'Michael Rabbat', 'Yann LeCun', 'Nicolas Ballas'] | ['cs.CV', 'cs.AI', 'cs.LG', 'eess.IV'] | This paper demonstrates an approach for learning highly semantic image
representations without relying on hand-crafted data-augmentations. We
introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a
non-generative approach for self-supervised learning from images. The idea
behind I-JEPA is simple: ... | 2023-01-19T18:59:01Z | 2023 IEEE/CVF International Conference on Computer Vision | null | null | null | null | null | null | null | null | null |
2,301.08247 | Multiview Compressive Coding for 3D Reconstruction | ['Chao-Yuan Wu', 'Justin Johnson', 'Jitendra Malik', 'Christoph Feichtenhofer', 'Georgia Gkioxari'] | ['cs.CV'] | A central goal of visual recognition is to understand objects and scenes from
a single image. 2D recognition has witnessed tremendous progress thanks to
large-scale learning and general-purpose representations. Comparatively, 3D
poses new challenges stemming from occlusions not depicted in the image. Prior
works try to... | 2023-01-19T18:59:52Z | Project page: https://mcc3d.github.io/ | null | null | Multiview Compressive Coding for 3D Reconstruction | ['Chaozheng Wu', 'Justin Johnson', 'J. Malik', 'Christoph Feichtenhofer', 'Georgia Gkioxari'] | 2,023 | Computer Vision and Pattern Recognition | 75 | 87 | ['Computer Science'] |
2,301.08784 | Visual Semantic Relatedness Dataset for Image Captioning | ['Ahmed Sabir', 'Francesc Moreno-Noguer', 'Lluís Padró'] | ['cs.CL', 'cs.CV'] | Modern image captioning systems rely heavily on extracting knowledge from
images to capture the concept of a static story. In this paper, we propose a
textual visual context dataset for captioning, in which the publicly available
dataset COCO Captions (Lin et al., 2014) has been extended with information
about the sce... | 2023-01-20T20:04:35Z | Project Page: bit.ly/project-page-paper | null | null | Visual Semantic Relatedness Dataset for Image Captioning | ['Ahmed Sabir', 'F. Moreno-Noguer', "Llu'is Padr'o"] | 2,023 | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | 3 | 49 | ['Computer Science'] |
2,301.08810 | Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme
Predictions | ['Yinghao Aaron Li', 'Cong Han', 'Xilin Jiang', 'Nima Mesgarani'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Large-scale pre-trained language models have been shown to be helpful in
improving the naturalness of text-to-speech (TTS) models by enabling them to
produce more naturalistic prosodic patterns. However, these models are usually
word-level or sup-phoneme-level and jointly trained with phonemes, making them
inefficient ... | 2023-01-20T21:36:16Z | null | null | null | null | null | null | null | null | null | null |
2,301.09123 | Face Generation from Textual Features using Conditionally Trained Inputs
to Generative Adversarial Networks | ['Sandeep Shinde', 'Tejas Pradhan', 'Aniket Ghorpade', 'Mihir Tale'] | ['cs.CV', 'cs.AI'] | Generative Networks have proved to be extremely effective in image
restoration and reconstruction in the past few years. Generating faces from
textual descriptions is one such application where the power of generative
algorithms can be used. The task of generating faces can be useful for a number
of applications such a... | 2023-01-22T13:27:12Z | null | null | null | null | null | null | null | null | null | null |
2,301.09626 | Efficient Language Model Training through Cross-Lingual and Progressive
Transfer Learning | ['Malte Ostendorff', 'Georg Rehm'] | ['cs.CL', 'cs.AI'] | Most Transformer language models are primarily pretrained on English text,
limiting their use for other languages. As the model sizes grow, the
performance gap between English and other languages with fewer compute and data
resources increases even further. Consequently, more resource-efficient
training methods are nee... | 2023-01-23T18:56:12Z | null | null | null | Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning | ['Malte Ostendorff', 'Georg Rehm'] | 2,023 | arXiv.org | 28 | 61 | ['Computer Science'] |
2,301.10226 | A Watermark for Large Language Models | ['John Kirchenbauer', 'Jonas Geiping', 'Yuxin Wen', 'Jonathan Katz', 'Ian Miers', 'Tom Goldstein'] | ['cs.LG', 'cs.CL', 'cs.CR'] | Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded ... | 2023-01-24T18:52:59Z | 13 pages in the main body. Published at ICML 2023. Code is available
at github.com/jwkirchenbauer/lm-watermarking | null | null | A Watermark for Large Language Models | ['John Kirchenbauer', 'Jonas Geiping', 'Yuxin Wen', 'Jonathan Katz', 'Ian Miers', 'T. Goldstein'] | 2,023 | International Conference on Machine Learning | 511 | 64 | ['Computer Science'] |
2,301.10345 | LuSEE 'Night': The Lunar Surface Electromagnetics Experiment | ['Stuart D. Bale', 'Neil Bassett', 'Jack O. Burns', 'Johnny Dorigo Jones', 'Keith Goetz', 'Christian Hellum-Bye', 'Sven Hermann', 'Joshua Hibbard', 'Milan Maksimovic', 'Ryan McLean', 'Raul Monsalve', "Paul O'Connor", 'Aaron Parsons', 'Marc Pulupa', 'Rugved Pund', 'David Rapetti', 'Kaja M. Rotermund', 'Ben Saliwanchik',... | ['astro-ph.IM', 'astro-ph.EP', 'astro-ph.GA', 'astro-ph.SR'] | The Lunar Surface Electromagnetics Explorer 'LuSEE Night' is a low frequency
radio astronomy experiment that will be delivered to the farside of the Moon by
the NASA Commercial Lunar Payload Services (CLPS) program in late 2025 or early
2026. The payload system is being developed jointly by NASA and the US
Department o... | 2023-01-24T23:23:04Z | summary paper submitted to URSI GASS 2023 | null | null | LuSEE 'Night': The Lunar Surface Electromagnetics Experiment | ['S. Bale', 'N. Bassett', 'Jack O. Burns', 'John Dorigo Jones', 'K. Goetz', 'Christian Hellum-Bye', 'Sven Hermann', 'J. Hibbard', 'M. Maksimović', 'Ryan McLean', 'Raul Monsalve', 'Paul O’Connor', 'A. Parsons', 'M. Pulupa', 'Rugved Pund', 'D. Rapetti', 'K. Rotermund', 'B. Saliwanchik', 'A. Slosar', 'D. Sundkvist', 'A. S... | 2,023 | null | 20 | 1 | ['Physics'] |
2,301.10405 | Editing Language Model-based Knowledge Graph Embeddings | ['Siyuan Cheng', 'Ningyu Zhang', 'Bozhong Tian', 'Xi Chen', 'Qingbing Liu', 'Huajun Chen'] | ['cs.CL', 'cs.AI', 'cs.DB', 'cs.IR', 'cs.LG'] | Recent decades have witnessed the empirical success of framing Knowledge
Graph (KG) embeddings via language models. However, language model-based KG
embeddings are usually deployed as static artifacts, making them difficult to
modify without re-training after deployment. To address this
issue, we prop... | 2023-01-25T04:45:06Z | AAAI 2024. The project website is
https://zjunlp.github.io/project/KGE_Editing/ | null | null | null | null | null | null | null | null | null |
2,301.10472 | XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked
Language Models | ['Davis Liang', 'Hila Gonen', 'Yuning Mao', 'Rui Hou', 'Naman Goyal', 'Marjan Ghazvininejad', 'Luke Zettlemoyer', 'Madian Khabsa'] | ['cs.CL', 'cs.LG'] | Large multilingual language models typically rely on a single vocabulary
shared across 100+ languages. As these models have increased in parameter count
and depth, vocabulary size has remained largely unchanged. This
\textit{vocabulary bottleneck} limits the representational capabilities of
multilingual models like XLM... | 2023-01-25T09:15:17Z | EMNLP 2023 | null | null | XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models | ['Davis Liang', 'Hila Gonen', 'Yuning Mao', 'Rui Hou', 'Naman Goyal', 'Marjan Ghazvininejad', 'Luke Zettlemoyer', 'Madian Khabsa'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 80 | 37 | ['Computer Science'] |
2,301.10527 | Cross-lingual Argument Mining in the Medical Domain | ['Anar Yeginbergen', 'Rodrigo Agerri'] | ['cs.CL'] | Nowadays the medical domain is receiving more and more attention in
applications involving Artificial Intelligence as clinicians' decision-making is
increasingly dependent on dealing with enormous amounts of unstructured textual
data. In this context, Argument Mining (AM) helps to meaningfully structure
textual data by ... | 2023-01-25T11:21:12Z | null | Procesamiento del Lenguaje Natural vol 73, 2024 | null | null | null | null | null | null | null | null |
2,301.11093 | Simple diffusion: End-to-end diffusion for high resolution images | ['Emiel Hoogeboom', 'Jonathan Heek', 'Tim Salimans'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Currently, applying diffusion models in pixel space of high resolution images
is difficult. Instead, existing approaches focus on diffusion in lower
dimensional spaces (latent diffusion), or have multiple super-resolution levels
of generation referred to as cascades. The downside is that these approaches
add additional... | 2023-01-26T13:35:02Z | null | null | null | simple diffusion: End-to-end diffusion for high resolution images | ['Emiel Hoogeboom', 'J. Heek', 'Tim Salimans'] | 2,023 | International Conference on Machine Learning | 268 | 32 | ['Computer Science', 'Mathematics'] |
2,301.11259 | Domain-Agnostic Molecular Generation with Chemical Feedback | ['Yin Fang', 'Ningyu Zhang', 'Zhuo Chen', 'Lingbing Guo', 'Xiaohui Fan', 'Huajun Chen'] | ['cs.LG', 'cs.AI', 'cs.CE', 'cs.CL'] | The generation of molecules with desired properties has become increasingly
popular, revolutionizing the way scientists design molecular structures and
providing valuable support for chemical and drug design. However, despite the
potential of language models in molecule generation, they face challenges such
as generati... | 2023-01-26T17:52:56Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
2,301.11270 | Principled Reinforcement Learning with Human Feedback from Pairwise or
$K$-wise Comparisons | ['Banghua Zhu', 'Jiantao Jiao', 'Michael I. Jordan'] | ['cs.LG', 'cs.AI', 'cs.HC', 'math.ST', 'stat.ML', 'stat.TH'] | We provide a theoretical framework for Reinforcement Learning with Human
Feedback (RLHF). Our analysis shows that when the true reward function is
linear, the widely used maximum likelihood estimator (MLE) converges under both
the Bradley-Terry-Luce (BTL) model and the Plackett-Luce (PL) model. However,
we show that wh... | 2023-01-26T18:07:21Z | null | null | null | null | null | null | null | null | null | null |
2,301.11305 | DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability
Curvature | ['Eric Mitchell', 'Yoonho Lee', 'Alexander Khazatsky', 'Christopher D. Manning', 'Chelsea Finn'] | ['cs.CL', 'cs.AI'] | The increasing fluency and widespread usage of large language models (LLMs)
highlight the desirability of corresponding tools aiding detection of
LLM-generated text. In this paper, we identify a property of the structure of
an LLM's probability function that is useful for such detection. Specifically,
we demonstrate th... | 2023-01-26T18:44:06Z | ICML 2023 | null | null | null | null | null | null | null | null | null |
2,301.11308 | Neural Continuous-Discrete State Space Models for Irregularly-Sampled
Time Series | ['Abdul Fatir Ansari', 'Alvin Heng', 'Andre Lim', 'Harold Soh'] | ['cs.LG', 'cs.AI', 'stat.ML'] | Learning accurate predictive models of real-world dynamic phenomena (e.g.,
climate, biological) remains a challenging task. One key issue is that the data
generated by both natural and artificial processes often comprise time series
that are irregularly sampled and/or contain missing observations. In this work,
we prop... | 2023-01-26T18:45:04Z | ICML 2023 Camera Ready Version; Code available at
https://github.com/clear-nus/NCDSSM | null | null | null | null | null | null | null | null | null |
2,301.11325 | MusicLM: Generating Music From Text | ['Andrea Agostinelli', 'Timo I. Denk', 'Zalán Borsos', 'Jesse Engel', 'Mauro Verzetti', 'Antoine Caillon', 'Qingqing Huang', 'Aren Jansen', 'Adam Roberts', 'Marco Tagliasacchi', 'Matt Sharifi', 'Neil Zeghidour', 'Christian Frank'] | ['cs.SD', 'cs.LG', 'eess.AS'] | We introduce MusicLM, a model generating high-fidelity music from text
descriptions such as "a calming violin melody backed by a distorted guitar
riff". MusicLM casts the process of conditional music generation as a
hierarchical sequence-to-sequence modeling task, and it generates music at 24
kHz that remains consisten... | 2023-01-26T18:58:53Z | Supplementary material at
https://google-research.github.io/seanet/musiclm/examples and
https://kaggle.com/datasets/googleai/musiccaps | null | null | null | null | null | null | null | null | null |
2,301.11525 | Mixed Attention Network for Hyperspectral Image Denoising | ['Zeqiang Lai', 'Ying Fu'] | ['cs.CV', 'cs.LG', 'eess.IV'] | Hyperspectral image denoising is unique for the highly similar and correlated
spectral information that should be properly considered. However, existing
methods show limitations in exploring the spectral correlations across
different bands and feature interactions within each band. Besides, the low-
and high-level feat... | 2023-01-27T04:02:35Z | Code is available at https://github.com/Zeqiang-Lai/MAN. arXiv admin
note: text overlap with arXiv:2211.14811 | null | null | Mixed Attention Network for Hyperspectral Image Denoising | ['Zeqiang Lai', 'Ying Fu'] | 2,023 | arXiv.org | 15 | 46 | ['Computer Science', 'Engineering'] |
2,301.11699 | Image Restoration with Mean-Reverting Stochastic Differential Equations | ['Ziwei Luo', 'Fredrik K. Gustafsson', 'Zheng Zhao', 'Jens Sjölund', 'Thomas B. Schön'] | ['cs.LG', 'cs.CV'] | This paper presents a stochastic differential equation (SDE) approach for
general-purpose image restoration. The key construction consists in a
mean-reverting SDE that transforms a high-quality image into a degraded
counterpart as a mean state with fixed Gaussian noise. Then, by simulating the
corresponding reverse-tim... | 2023-01-27T13:20:48Z | Accepted by ICML 2023; Project page:
https://algolzw.github.io/ir-sde/index.html | null | null | null | null | null | null | null | null | null |
2,301.11757 | Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion | ['Flavio Schneider', 'Ojasv Kamal', 'Zhijing Jin', 'Bernhard Schölkopf'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | Recent years have seen the rapid development of large generative models for
text; however, much less research has explored the connection between text and
another "language" of communication -- music. Music, much like text, can convey
emotions, stories, and ideas, and has its own unique structure and syntax. In
our wor... | 2023-01-27T14:52:53Z | null | null | null | null | null | null | null | null | null | null |
2,301.11796 | Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on
a developmentally plausible corpus | ['Alex Warstadt', 'Leshem Choshen', 'Aaron Mueller', 'Adina Williams', 'Ethan Wilcox', 'Chengxu Zhuang'] | ['cs.CL'] | We present the call for papers for the BabyLM Challenge: Sample-efficient
pretraining on a developmentally plausible corpus. This shared task is intended
for participants with an interest in small scale language modeling, human
language acquisition, low-resource NLP, and cognitive modeling. In partnership
with CoNLL an... | 2023-01-27T15:52:50Z | null | null | null | null | null | null | null | null | null | null |
2,301.11975 | Byte Pair Encoding for Symbolic Music | ['Nathan Fradet', 'Nicolas Gutowski', 'Fabien Chhel', 'Jean-Pierre Briot'] | ['cs.LG', 'cs.AI', 'cs.SD', 'eess.AS'] | When used with deep learning, the symbolic music modality is often coupled
with language model architectures. To do so, the music needs to be tokenized,
i.e. converted into a sequence of discrete tokens. This can be achieved by
different approaches, as music can be composed of simultaneous tracks, of
simultaneous notes... | 2023-01-27T20:22:18Z | EMNLP 2023, source code: https://github.com/Natooz/BPE-Symbolic-Music | null | null | null | null | null | null | null | null | null |
2,301.12040 | ProtST: Multi-Modality Learning of Protein Sequences and Biomedical
Texts | ['Minghao Xu', 'Xinyu Yuan', 'Santiago Miret', 'Jian Tang'] | ['q-bio.BM', 'cs.LG'] | Current protein language models (PLMs) learn protein representations mainly
based on their sequences, thereby well capturing co-evolutionary information,
but they are unable to explicitly acquire protein functions, which is the end
goal of protein representation learning. Fortunately, for many proteins, their
textual p... | 2023-01-28T00:58:48Z | Accpeted by ICML 2023 (Oral), code and data released | null | null | null | null | null | null | null | null | null |
2,301.12149 | POSTER++: A simpler and stronger facial expression recognition network | ['Jiawei Mao', 'Rui Xu', 'Xuesong Yin', 'Yuanqi Chang', 'Binling Nie', 'Aibin Huang'] | ['cs.CV'] | Facial expression recognition (FER) plays an important role in a variety of
real-world applications such as human-computer interaction. POSTER achieves the
state-of-the-art (SOTA) performance in FER by effectively combining facial
landmark and image features through two-stream pyramid cross-fusion design.
However, the ... | 2023-01-28T10:23:44Z | null | null | null | null | null | null | null | null | null | null |
2,301.12247 | SEGA: Instructing Text-to-Image Models using Semantic Guidance | ['Manuel Brack', 'Felix Friedrich', 'Dominik Hintersdorf', 'Lukas Struppek', 'Patrick Schramowski', 'Kristian Kersting'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Text-to-image diffusion models have recently received a lot of interest for
their astonishing ability to produce high-fidelity images from text only.
However, achieving one-shot generation that aligns with the user's intent is
nearly impossible, yet small changes to the input prompt often result in very
different image... | 2023-01-28T16:43:07Z | arXiv admin note: text overlap with arXiv:2212.06013 Proceedings of
the Advances in Neural Information Processing Systems: Annual Conference on
Neural Information Processing Systems (NeurIPS) | null | null | SEGA: Instructing Text-to-Image Models using Semantic Guidance | ['Manuel Brack', 'Felix Friedrich', 'Dominik Hintersdorf', 'Lukas Struppek', 'P. Schramowski', 'K. Kersting'] | 2,023 | Neural Information Processing Systems | 0 | 35 | ['Computer Science'] |
2,301.12307 | MQAG: Multiple-choice Question Answering and Generation for Assessing
Information Consistency in Summarization | ['Potsawee Manakul', 'Adian Liusie', 'Mark J. F. Gales'] | ['cs.CL'] | State-of-the-art summarization systems can generate highly fluent summaries.
These summaries, however, may contain factual inconsistencies and/or
information not present in the source. Hence, an important component of
assessing the quality of summaries is to determine whether there is information
consistency between th... | 2023-01-28T23:08:25Z | AACL 2023 | null | null | MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization | ['Potsawee Manakul', 'Adian Liusie', 'M. Gales'] | 2,023 | International Joint Conference on Natural Language Processing | 36 | 56 | ['Computer Science'] |
2,301.12503 | AudioLDM: Text-to-Audio Generation with Latent Diffusion Models | ['Haohe Liu', 'Zehua Chen', 'Yi Yuan', 'Xinhao Mei', 'Xubo Liu', 'Danilo Mandic', 'Wenwu Wang', 'Mark D. Plumbley'] | ['cs.SD', 'cs.AI', 'cs.MM', 'eess.AS', 'eess.SP'] | Text-to-audio (TTA) system has recently gained attention for its ability to
synthesize general audio based on text descriptions. However, previous studies
in TTA have limited generation quality with high computational costs. In this
study, we propose AudioLDM, a TTA system that is built on a latent space to
learn the c... | 2023-01-29T17:48:17Z | Accepted by ICML 2023. Demo and implementation at
https://audioldm.github.io. Evaluation toolbox at
https://github.com/haoheliu/audioldm_eval | null | null | null | null | null | null | null | null | null |
2,301.12586 | Unifying Molecular and Textual Representations via Multi-task Language
Modelling | ['Dimitrios Christofidellis', 'Giorgio Giannone', 'Jannis Born', 'Ole Winther', 'Teodoro Laino', 'Matteo Manica'] | ['cs.LG', 'cs.CL'] | The recent advances in neural language models have also been successfully
applied to the field of chemistry, offering generative solutions for classical
problems in molecular design and synthesis planning. These new methods have the
potential to fuel a new era of data-driven automation in scientific discovery.
However,... | 2023-01-29T23:56:45Z | ICML 2023 | null | null | Unifying Molecular and Textual Representations via Multi-task Language Modelling | ['Dimitrios Christofidellis', 'Giorgio Giannone', 'Jannis Born', 'O. Winther', 'T. Laino', 'M. Manica'] | 2,023 | International Conference on Machine Learning | 89 | 63 | ['Computer Science'] |
2,301.12597 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image
Encoders and Large Language Models | ['Junnan Li', 'Dongxu Li', 'Silvio Savarese', 'Steven Hoi'] | ['cs.CV'] | The cost of vision-and-language pre-training has become increasingly
prohibitive due to end-to-end training of large-scale models. This paper
proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps
vision-language pre-training from off-the-shelf frozen pre-trained image
encoders and frozen large ... | 2023-01-30T00:56:51Z | null | null | null | null | null | null | null | null | null | null |
2,301.12661 | Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion
Models | ['Rongjie Huang', 'Jiawei Huang', 'Dongchao Yang', 'Yi Ren', 'Luping Liu', 'Mingze Li', 'Zhenhui Ye', 'Jinglin Liu', 'Xiang Yin', 'Zhou Zhao'] | ['cs.SD', 'cs.LG', 'cs.MM', 'eess.AS'] | Large-scale multimodal generative modeling has created milestones in
text-to-image and text-to-video generation. Its application to audio still lags
behind for two main reasons: the lack of large-scale datasets with high-quality
text-audio pairs, and the complexity of modeling long continuous audio data. In
this work, ... | 2023-01-30T04:44:34Z | Audio samples are available at https://Text-to-Audio.github.io | null | null | null | null | null | null | null | null | null |
2,301.12847 | Finding the Law: Enhancing Statutory Article Retrieval via Graph Neural
Networks | ['Antoine Louis', 'Gijs van Dijck', 'Gerasimos Spanakis'] | ['cs.IR', 'cs.CL'] | Statutory article retrieval (SAR), the task of retrieving statute law
articles relevant to a legal question, is a promising application of legal text
processing. In particular, high-quality SAR systems can improve the work
efficiency of legal professionals and provide basic legal assistance to
citizens in need at no co... | 2023-01-30T12:59:09Z | EACL 2023. Code is available at
https://github.com/maastrichtlawtech/gdsr | null | null | null | null | null | null | null | null | null |
2,301.13126 | LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain | ['Joel Niklaus', 'Veton Matoshi', 'Pooja Rani', 'Andrea Galassi', 'Matthias Stürmer', 'Ilias Chalkidis'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2'] | Lately, propelled by the phenomenal advances around the transformer
architecture, the legal NLP field has enjoyed spectacular growth. To measure
progress, well curated and challenging benchmarks are crucial. However, most
benchmarks are English only and in legal NLP specifically there is no
multilingual benchmark avail... | 2023-01-30T18:05:08Z | Published at EMNLP Findings 2023 | EMNLP Findings 2023 | 10.18653/v1/2023.findings-emnlp.200 | LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain | ['Joel Niklaus', 'Veton Matoshi', 'Pooja Rani', 'Andrea Galassi', 'Matthias Sturmer', 'Ilias Chalkidis'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 60 | 112 | ['Computer Science'] |
2,301.13155 | Advancing Radiograph Representation Learning with Masked Record Modeling | ['Hong-Yu Zhou', 'Chenyu Lian', 'Liansheng Wang', 'Yizhou Yu'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Modern studies in radiograph representation learning rely on either
self-supervision to encode invariant semantics or associated radiology reports
to incorporate medical expertise, while the complementarity between them is
barely noticed. To explore this, we formulate the self- and report-completion
as two complementar... | 2023-01-30T18:33:32Z | Camera ready at ICLR 2023. Code and models are available at
https://github.com/RL4M/MRM-pytorch | null | null | null | null | null | null | null | null | null |
2,301.13276 | Distributed Swarm Intelligence | ['Karthik Reddy Kanjula', 'Sai Meghana Kolla'] | ['cs.AI'] | This paper presents the development of a distributed application that
facilitates the understanding and application of swarm intelligence in solving
optimization problems. The platform comprises a search space of customizable
random particles, allowing users to tailor the solution to their specific
needs. By leveraging... | 2023-01-30T20:36:35Z | 7 pages, 3 Figure, 1 Algorithm | null | null | Distributed Swarm Intelligence | ['Karthik Reddy Kanjula', 'Sai Meghana Kolla'] | 2,023 | arXiv.org | 0 | 8 | ['Computer Science'] |
2,301.1343 | GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face
Synthesis | ['Zhenhui Ye', 'Ziyue Jiang', 'Yi Ren', 'Jinglin Liu', 'JinZheng He', 'Zhou Zhao'] | ['cs.CV'] | Generating photo-realistic video portrait with arbitrary speech audio is a
crucial problem in film-making and virtual reality. Recently, several works
explore the usage of neural radiance field in this task to improve 3D realness
and image fidelity. However, the generalizability of previous NeRF-based
methods to out-of... | 2023-01-31T05:56:06Z | Accepted by ICLR2023. Project page: https://geneface.github.io/ | null | null | GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis | ['Zhenhui Ye', 'Ziyue Jiang', 'Yi Ren', 'Jinglin Liu', 'Jinzheng He', 'Zhou Zhao'] | 2,023 | International Conference on Learning Representations | 130 | 40 | ['Computer Science'] |
2,301.13688 | The Flan Collection: Designing Data and Methods for Effective
Instruction Tuning | ['Shayne Longpre', 'Le Hou', 'Tu Vu', 'Albert Webson', 'Hyung Won Chung', 'Yi Tay', 'Denny Zhou', 'Quoc V. Le', 'Barret Zoph', 'Jason Wei', 'Adam Roberts'] | ['cs.AI', 'cs.CL', 'cs.LG'] | We study the design decisions of publicly available instruction tuning
methods, and break down the development of Flan 2022 (Chung et al., 2022).
Through careful ablation studies on the Flan Collection of tasks and methods,
we tease apart the effect of design decisions which enable Flan-T5 to
outperform prior work by 3... | 2023-01-31T15:03:44Z | null | null | null | null | null | null | null | null | null | null |
2,302.00275 | Learning Generalized Zero-Shot Learners for Open-Domain Image
Geolocalization | ['Lukas Haas', 'Silas Alberti', 'Michal Skreta'] | ['cs.CV', 'cs.LG'] | Image geolocalization is the challenging task of predicting the geographic
coordinates of origin for a given photo. It is an unsolved problem relying on
the ability to combine visual clues with general knowledge about the world to
make accurate predictions across geographies. We present
$\href{https://huggingface.co/ge... | 2023-02-01T06:44:07Z | null | null | null | null | null | null | null | null | null | null |
2,302.00856 | idT5: Indonesian Version of Multilingual T5 Transformer | ['Mukhlish Fuadi', 'Adhi Dharma Wibawa', 'Surya Sumpeno'] | ['cs.CL', 'I.2.7'] | Indonesian language is spoken by almost 200 million people and is the 10th
most spoken language in the world, but it is under-represented in NLP (Natural
Language Processing) research. A sparsity of language resources has hampered
previous work on Indonesian. The Transformer is a new architecture rapidly
becoming domin... | 2023-02-02T03:56:16Z | This work has been submitted to the IEEE for possible publication | null | null | null | null | null | null | null | null | null |
2,302.00923 | Multimodal Chain-of-Thought Reasoning in Language Models | ['Zhuosheng Zhang', 'Aston Zhang', 'Mu Li', 'Hai Zhao', 'George Karypis', 'Alex Smola'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Large language models (LLMs) have shown impressive performance on complex
reasoning by leveraging chain-of-thought (CoT) prompting to generate
intermediate reasoning chains as the rationale to infer the answer. However,
existing CoT studies have primarily focused on the language modality. We
propose Multimodal-CoT that... | 2023-02-02T07:51:19Z | Published in Transactions on Machine Learning Research | null | null | null | null | null | null | null | null | null |
2,302.0111 | DirectMHP: Direct 2D Multi-Person Head Pose Estimation with Full-range
Angles | ['Huayi Zhou', 'Fei Jiang', 'Hongtao Lu'] | ['cs.CV'] | Existing head pose estimation (HPE) mainly focuses on single person with
pre-detected frontal heads, which limits their applications in real complex
scenarios with multi-persons. We argue that these single HPE methods are
fragile and inefficient for Multi-Person Head Pose Estimation (MPHPE) since
they rely on the separ... | 2023-02-02T14:08:49Z | 13 pages | null | null | null | null | null | null | null | null | null |
2,302.0133 | SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections | ['Zhaoxi Chen', 'Guangcong Wang', 'Ziwei Liu'] | ['cs.CV', 'cs.GR'] | In this work, we present SceneDreamer, an unconditional generative model for
unbounded 3D scenes, which synthesizes large-scale 3D landscapes from random
noise. Our framework is learned from in-the-wild 2D image collections only,
without any 3D annotations. At the core of SceneDreamer is a principled
learning paradigm ... | 2023-02-02T18:59:16Z | IEEE Transactions on Pattern Analysis & Machine Intelligence (TPAMI)
2023; Project Page https://scene-dreamer.github.io/ Code
https://github.com/FrozenBurning/SceneDreamer | null | 10.1109/TPAMI.2023.3321857 | null | null | null | null | null | null | null |
2,302.01398 | The unreasonable effectiveness of few-shot learning for machine
translation | ['Xavier Garcia', 'Yamini Bansal', 'Colin Cherry', 'George Foster', 'Maxim Krikun', 'Fangxiaoyu Feng', 'Melvin Johnson', 'Orhan Firat'] | ['cs.CL'] | We demonstrate the potential of few-shot translation systems, trained with
unpaired language data, for both high and low-resource language pairs. We show
that with only 5 examples of high-quality translation data shown at inference,
a transformer decoder-only model trained solely with self-supervised learning,
is able ... | 2023-02-02T20:19:46Z | null | null | null | null | null | null | null | null | null | null |
2,302.01588 | Bioformer: an efficient transformer language model for biomedical text
mining | ['Li Fang', 'Qingyu Chen', 'Chih-Hsuan Wei', 'Zhiyong Lu', 'Kai Wang'] | ['cs.CL'] | Pretrained language models such as Bidirectional Encoder Representations from
Transformers (BERT) have achieved state-of-the-art performance in natural
language processing (NLP) tasks. Recently, BERT has been adapted to the
biomedical domain. Despite the effectiveness, these models have hundreds of
millions of paramete... | 2023-02-03T08:04:59Z | null | null | null | null | null | null | null | null | null | null |
2,302.01649 | Structure-informed Language Models Are Protein Designers | ['Zaixiang Zheng', 'Yifan Deng', 'Dongyu Xue', 'Yi Zhou', 'Fei YE', 'Quanquan Gu'] | ['cs.LG'] | This paper demonstrates that language models are strong structure-based
protein designers. We present LM-Design, a generic approach to reprogramming
sequence-based protein language models (pLMs), that have learned massive
sequential evolutionary knowledge from the universe of natural protein
sequences, to acquire an im... | 2023-02-03T10:49:52Z | 10 pages; ver.2 update: added image credit to RFdiffusion (Watson et
al., 2022) in Fig. 1F, and fixed some small presentation errors | null | null | Structure-informed Language Models Are Protein Designers | ['Zaixiang Zheng', 'Yifan Deng', 'Dongyu Xue', 'Yi Zhou', 'YE Fei', 'Quanquan Gu'] | 2,023 | bioRxiv | 104 | 93 | ['Computer Science', 'Biology'] |
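The `ss_*` columns in the rows above can be queried programmatically once the split is exported. A minimal sketch, assuming each row arrives as a plain Python dict keyed by the schema's column names; the `top_cited` helper is ours, and the sample records are copied from the rows shown:

```python
# Sample records copied from the table above (titles, venues, and
# Semantic Scholar citation counts); rows with null ss_* fields keep None.
records = [
    {"title": "GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis",
     "ss_venue": "International Conference on Learning Representations", "ss_citationCount": 130},
    {"title": "Structure-informed Language Models Are Protein Designers",
     "ss_venue": "bioRxiv", "ss_citationCount": 104},
    {"title": "Unifying Molecular and Textual Representations via Multi-task Language Modelling",
     "ss_venue": "International Conference on Machine Learning", "ss_citationCount": 89},
    {"title": "LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain",
     "ss_venue": "Conference on Empirical Methods in Natural Language Processing", "ss_citationCount": 60},
    {"title": "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models",
     "ss_venue": None, "ss_citationCount": None},
]

def top_cited(rows, n=2):
    """Return the n rows with the highest ss_citationCount, skipping nulls."""
    cited = [r for r in rows if r["ss_citationCount"] is not None]
    return sorted(cited, key=lambda r: r["ss_citationCount"], reverse=True)[:n]

for r in top_cited(records):
    print(f'{r["ss_citationCount"]:>4}  {r["title"]}')
```

Filtering out `None` before sorting mirrors how the viewer renders unmatched Semantic Scholar lookups as `null`; any aggregate over `ss_citationCount` or `ss_venue` needs the same guard.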