| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.11011 | VinaLLaMA: LLaMA-based Vietnamese Foundation Model | ['Quan Nguyen', 'Huy Pham', 'Dung Dao'] | ['cs.CL'] | In this technical report, we present VinaLLaMA, an open-weight,
state-of-the-art (SOTA) Large Language Model for the Vietnamese language, built
upon LLaMA-2 with an additional 800 billion trained tokens. VinaLLaMA not only
demonstrates fluency in Vietnamese but also exhibits a profound understanding
of Vietnamese cultu... | 2023-12-18T08:27:33Z | VinaLLaMA Technical Report - 13 pages | null | null | null | null | null | null | null | null | null |
2312.11193 | Training With "Paraphrasing the Original Text" Teaches LLM to Better
Retrieve in Long-context Tasks | ['Yijiong Yu', 'Yongfeng Huang', 'Zhixiao Qi', 'Zhe Zhou'] | ['cs.CL', 'cs.AI'] | As Large Language Models (LLMs) continue to evolve, more are being designed
to handle long-context inputs. Despite this advancement, most of them still
face challenges in accurately handling long-context tasks, often showing the
"lost in the middle" issue. We identify that insufficient retrieval capability
is one of th... | 2023-12-18T13:40:16Z | Our code and datasets are available at
https://github.com/yuyijiong/train_with_paraphrasing | null | null | null | null | null | null | null | null | null |
2312.11243 | GraspLDM: Generative 6-DoF Grasp Synthesis using Latent Diffusion Models | ['Kuldeep R Barad', 'Andrej Orsula', 'Antoine Richard', 'Jan Dentler', 'Miguel Olivares-Mendez', 'Carol Martinez'] | ['cs.RO'] | Vision-based grasping of unknown objects in unstructured environments is a
key challenge for autonomous robotic manipulation. A practical grasp synthesis
system is required to generate a diverse set of 6-DoF grasps from which a
task-relevant grasp can be executed. Although generative models are suitable
for learning su... | 2023-12-18T14:40:45Z | null | IEEE Access, vol. 12, pp. 164621-164633, 2024 | 10.1109/ACCESS.2024.3492118 | null | null | null | null | null | null | null |
2312.11392 | SCEdit: Efficient and Controllable Image Diffusion Generation via Skip
Connection Editing | ['Zeyinzi Jiang', 'Chaojie Mao', 'Yulin Pan', 'Zhen Han', 'Jingfeng Zhang'] | ['cs.CV'] | Image diffusion models have been utilized in various tasks, such as
text-to-image generation and controllable image synthesis. Recent research has
introduced tuning methods that make subtle adjustments to the original models,
yielding promising results in specific adaptations of foundational generative
diffusion models... | 2023-12-18T17:54:14Z | null | null | null | null | null | null | null | null | null | null |
2312.11456 | Iterative Preference Learning from Human Feedback: Bridging Theory and
Practice for RLHF under KL-Constraint | ['Wei Xiong', 'Hanze Dong', 'Chenlu Ye', 'Ziqi Wang', 'Han Zhong', 'Heng Ji', 'Nan Jiang', 'Tong Zhang'] | ['cs.LG', 'cs.AI', 'stat.ML'] | This paper studies the alignment process of generative models with
Reinforcement Learning from Human Feedback (RLHF). We first identify the
primary challenges of existing popular methods like offline PPO and offline DPO
as lacking in strategical exploration of the environment. Then, to understand
the mathematical princ... | 2023-12-18T18:58:42Z | 53 pages; theoretical study and algorithmic design of iterative RLHF
and DPO | null | null | null | null | null | null | null | null | null |
2312.11502 | Labrador: Exploring the Limits of Masked Language Modeling for
Laboratory Data | ['David R. Bellamy', 'Bhawesh Kumar', 'Cindy Wang', 'Andrew Beam'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this work we introduce Labrador, a pre-trained Transformer model for
laboratory data. Labrador and BERT were pre-trained on a corpus of 100 million
lab test results from electronic health records (EHRs) and evaluated on various
downstream outcome prediction tasks. Both models demonstrate mastery of the
pre-training ... | 2023-12-09T23:43:35Z | 26 pages, 8 figures, best paper award at ML4H 2024 | null | null | null | null | null | null | null | null | null |
2312.11556 | StarVector: Generating Scalable Vector Graphics Code from Images and
Text | ['Juan A. Rodriguez', 'Abhay Puri', 'Shubham Agarwal', 'Issam H. Laradji', 'Pau Rodriguez', 'Sai Rajeswar', 'David Vazquez', 'Christopher Pal', 'Marco Pedersoli'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Scalable Vector Graphics (SVGs) are vital for modern image rendering due to
their scalability and versatility. Previous SVG generation methods have focused
on curve-based vectorization, lacking semantic understanding, often producing
artifacts, and struggling with SVG primitives beyond path curves. To address
these iss... | 2023-12-17T08:07:32Z | null | null | null | StarVector: Generating Scalable Vector Graphics Code from Images and Text | ['Juan A. Rodriguez', 'Abhay Puri', 'Shubham Agarwal', 'I. Laradji', 'Pau Rodríguez', 'Sai Rajeswar', 'David Vázquez', 'Christopher Pal', 'Marco Pedersoli'] | 2,023 | AAAI Conference on Artificial Intelligence | 11 | 94 | ['Computer Science'] |
2312.11805 | Gemini: A Family of Highly Capable Multimodal Models | ['Gemini Team', 'Rohan Anil', 'Sebastian Borgeaud', 'Jean-Baptiste Alayrac', 'Jiahui Yu', 'Radu Soricut', 'Johan Schalkwyk', 'Andrew M. Dai', 'Anja Hauth', 'Katie Millican', 'David Silver', 'Melvin Johnson', 'Ioannis Antonoglou', 'Julian Schrittwieser', 'Amelia Glaese', 'Jilin Chen', 'Emily Pitler', 'Timothy Lillicrap'... | ['cs.CL', 'cs.AI', 'cs.CV'] | This report introduces a new family of multimodal models, Gemini, that
exhibit remarkable capabilities across image, audio, video, and text
understanding. The Gemini family consists of Ultra, Pro, and Nano sizes,
suitable for applications ranging from complex reasoning tasks to on-device
memory-constrained use-cases. E... | 2023-12-19T02:39:27Z | null | null | null | null | null | null | null | null | null | null |
2312.11894 | 3D-LFM: Lifting Foundation Model | ['Mosam Dabhi', 'Laszlo A. Jeni', 'Simon Lucey'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The lifting of 3D structure and camera from 2D landmarks is at the
cornerstone of the entire discipline of computer vision. Traditional methods
have been confined to specific rigid objects, such as those in
Perspective-n-Point (PnP) problems, but deep learning has expanded our
capability to reconstruct a wide range of ... | 2023-12-19T06:38:18Z | Visit the project page at https://3dlfm.github.io for links to
additional media, code, and videos. The site also features a custom GPT
tailored to address queries related to 3D-LFM. Accepted at CVPR 2024 | null | null | 3D-LFM: Lifting Foundation Model | ['Mosam Dabhi', 'László A. Jeni', 'Simon Lucey'] | 2,023 | Computer Vision and Pattern Recognition | 5 | 33 | ['Computer Science'] |
2312.11983 | Fluctuation-based Adaptive Structured Pruning for Large Language Models | ['Yongqi An', 'Xu Zhao', 'Tao Yu', 'Ming Tang', 'Jinqiao Wang'] | ['cs.CL', 'cs.AI'] | Network Pruning is a promising way to address the huge computing resource
demands of the deployment and inference of Large Language Models (LLMs).
Retraining-free is important for LLMs' pruning methods. However, almost all of
the existing retraining-free pruning approaches for LLMs focus on unstructured
pruning, which ... | 2023-12-19T09:23:48Z | Accepted to AAAI 2024 | null | null | null | null | null | null | null | null | null |
2312.12337 | pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable
Generalizable 3D Reconstruction | ['David Charatan', 'Sizhe Li', 'Andrea Tagliasacchi', 'Vincent Sitzmann'] | ['cs.CV', 'cs.LG'] | We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D
radiance fields parameterized by 3D Gaussian primitives from pairs of images.
Our model features real-time and memory-efficient rendering for scalable
training as well as fast 3D reconstruction at inference time. To overcome local
minima inhere... | 2023-12-19T17:03:50Z | Project page: https://dcharatan.github.io/pixelsplat | null | null | null | null | null | null | null | null | null |
2312.12379 | Mixture of Cluster-conditional LoRA Experts for Vision-language
Instruction Tuning | ['Yunhao Gou', 'Zhili Liu', 'Kai Chen', 'Lanqing Hong', 'Hang Xu', 'Aoxue Li', 'Dit-Yan Yeung', 'James T. Kwok', 'Yu Zhang'] | ['cs.CV'] | Instruction tuning of Large Vision-language Models (LVLMs) has revolutionized
the development of versatile models with zero-shot generalization across a wide
range of downstream vision-language tasks. However, the diversity of training
tasks of different sources and formats would lead to inevitable task conflicts,
wher... | 2023-12-19T18:11:19Z | Project website: https://gyhdog99.github.io/projects/mocle/ | null | null | null | null | null | null | null | null | null |
2312.12433 | TAO-Amodal: A Benchmark for Tracking Any Object Amodally | ['Cheng-Yen Hsieh', 'Kaihua Chen', 'Achal Dave', 'Tarasha Khurana', 'Deva Ramanan'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Amodal perception, the ability to comprehend complete object structures from
partial visibility, is a fundamental skill, even for infants. Its significance
extends to applications like autonomous driving, where a clear understanding of
heavily occluded objects is essential. However, modern detection and tracking
algori... | 2023-12-19T18:58:40Z | Project Page: https://tao-amodal.github.io | null | null | TAO-Amodal: A Benchmark for Tracking Any Object Amodally | ['Cheng-Yen Hsieh', 'Kaihua Chen', 'Achal Dave', 'Tarasha Khurana', 'Deva Ramanan'] | 2,023 | null | 0 | 72 | ['Computer Science'] |
2312.12450 | Can It Edit? Evaluating the Ability of Large Language Models to Follow
Code Editing Instructions | ['Federico Cassano', 'Luisa Li', 'Akul Sethi', 'Noah Shinn', 'Abby Brennan-Jones', 'Jacob Ginesin', 'Edward Berman', 'George Chakhnashvili', 'Anton Lozhkov', 'Carolyn Jane Anderson', 'Arjun Guha'] | ['cs.SE', 'cs.AI', 'cs.LG', 'cs.PL'] | A significant amount of research is focused on developing and evaluating
large language models for a variety of code synthesis tasks. These include
synthesizing code from natural language, synthesizing tests from code, and
synthesizing explanations of code. In contrast, the behavior of instructional
code editing with L... | 2023-12-11T02:27:45Z | null | null | null | Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions | ['Federico Cassano', 'Luisa Li', 'Akul Sethi', 'Noah Shinn', 'Abby Brennan-Jones', 'Anton Lozhkov', 'C. Anderson', 'Arjun Guha'] | 2,023 | arXiv.org | 27 | 60 | ['Computer Science'] |
2312.12456 | PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU | ['Yixin Song', 'Zeyu Mi', 'Haotong Xie', 'Haibo Chen'] | ['cs.LG', 'cs.OS'] | This paper introduces PowerInfer, a high-speed Large Language Model (LLM)
inference engine on a personal computer (PC) equipped with a single
consumer-grade GPU. The key principle underlying the design of PowerInfer is
exploiting the high locality inherent in LLM inference, characterized by a
power-law distribution in ... | 2023-12-16T02:27:00Z | SOSP 2024 | null | null | null | null | null | null | null | null | null |
2312.12852 | Language Resources for Dutch Large Language Modelling | ['Bram Vanroy'] | ['cs.CL', 'cs.AI'] | Despite the rapid expansion of types of large language models, there remains
a notable gap in models specifically designed for the Dutch language. This gap
is not only a shortage in terms of pretrained Dutch models but also in terms of
data, and benchmarks and leaderboards. This work provides a small step to
improve th... | 2023-12-20T09:06:06Z | null | null | null | null | null | null | null | null | null | null |
2312.12865 | RadEdit: stress-testing biomedical vision models via diffusion image
editing | ['Fernando Pérez-García', 'Sam Bond-Taylor', 'Pedro P. Sanchez', 'Boris van Breugel', 'Daniel C. Castro', 'Harshita Sharma', 'Valentina Salvatelli', 'Maria T. A. Wetscherek', 'Hannah Richardson', 'Matthew P. Lungren', 'Aditya Nori', 'Javier Alvarez-Valle', 'Ozan Oktay', 'Maximilian Ilse'] | ['cs.CV', 'cs.AI'] | Biomedical imaging datasets are often small and biased, meaning that
real-world performance of predictive models can be substantially lower than
expected from internal testing. This work proposes using generative image
editing to simulate dataset shifts and diagnose failure modes of biomedical
vision models; this can b... | 2023-12-20T09:27:41Z | null | European Conference on Computer Vision (ECCV) 2024 | 10.1007/978-3-031-73254-6_21 | null | null | null | null | null | null | null |
2312.12999 | Machine Mindset: An MBTI Exploration of Large Language Models | ['Jiaxi Cui', 'Liuzhenghao Lv', 'Jing Wen', 'Rongsheng Wang', 'Jing Tang', 'YongHong Tian', 'Li Yuan'] | ['cs.CL'] | We present a novel approach for integrating Myers-Briggs Type Indicator
(MBTI) personality traits into large language models (LLMs), addressing the
challenges of personality consistency in personalized AI. Our method, "Machine
Mindset," involves a two-phase fine-tuning and Direct Preference Optimization
(DPO) to embed ... | 2023-12-20T12:59:31Z | null | null | null | Machine Mindset: An MBTI Exploration of Large Language Models | ['Jiaxi Cui', 'Liuzhenghao Lv', 'Jing Wen', 'Rongsheng Wang', 'Jing Tang', 'Yonghong Tian', 'Li Yuan'] | 2,023 | arXiv.org | 6 | 8 | ['Computer Science'] |
2312.13286 | Generative Multimodal Models are In-Context Learners | ['Quan Sun', 'Yufeng Cui', 'Xiaosong Zhang', 'Fan Zhang', 'Qiying Yu', 'Zhengxiong Luo', 'Yueze Wang', 'Yongming Rao', 'Jingjing Liu', 'Tiejun Huang', 'Xinlong Wang'] | ['cs.CV'] | The human ability to easily solve multimodal tasks in context (i.e., with
only a few demonstrations or simple instructions), is what current multimodal
systems have largely struggled to imitate. In this work, we demonstrate that
the task-agnostic in-context learning capabilities of large multimodal models
can be signif... | 2023-12-20T18:59:58Z | Accepted to CVPR 2024. Project page:
https://baaivision.github.io/emu2 | null | null | Generative Multimodal Models are In-Context Learners | ['Quan Sun', 'Yufeng Cui', 'Xiaosong Zhang', 'Fan Zhang', 'Qiying Yu', 'Zhengxiong Luo', 'Yueze Wang', 'Yongming Rao', 'Jingjing Liu', 'Tiejun Huang', 'Xinlong Wang'] | 2,023 | Computer Vision and Pattern Recognition | 291 | 94 | ['Computer Science'] |
2312.13322 | MonoCoder: Domain-Specific Code Language Model for HPC Codes and Tasks | ['Tal Kadosh', 'Niranjan Hasabnis', 'Vy A. Vo', 'Nadav Schneider', 'Neva Krien', 'Mihai Capota', 'Abdul Wasay', 'Nesreen Ahmed', 'Ted Willke', 'Guy Tamir', 'Yuval Pinter', 'Timothy Mattson', 'Gal Oren'] | ['cs.PL', 'cs.AI', 'cs.LG', 'cs.SE'] | With easier access to powerful compute resources, there is a growing trend in
AI for software development to develop large language models (LLMs) to address
a variety of programming tasks. Even LLMs applied to tasks from the
high-performance computing (HPC) domain are huge in size and demand expensive
compute resources... | 2023-12-20T15:11:06Z | null | null | null | null | null | null | null | null | null | null |
2312.13558 | The Truth is in There: Improving Reasoning in Language Models with
Layer-Selective Rank Reduction | ['Pratyusha Sharma', 'Jordan T. Ash', 'Dipendra Misra'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV'] | Transformer-based Large Language Models (LLMs) have become a fixture in
modern machine learning. Correspondingly, significant resources are allocated
towards research that aims to further advance this technology, typically
resulting in models of increasing size that are trained on increasing amounts
of data. This work,... | 2023-12-21T03:51:08Z | null | null | null | The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction | ['Pratyusha Sharma', 'Jordan T. Ash', 'Dipendra Misra'] | 2,023 | International Conference on Learning Representations | 92 | 44 | ['Computer Science'] |
2312.13789 | TinySAM: Pushing the Envelope for Efficient Segment Anything Model | ['Han Shu', 'Wenshuo Li', 'Yehui Tang', 'Yiman Zhang', 'Yihao Chen', 'Houqiang Li', 'Yunhe Wang', 'Xinghao Chen'] | ['cs.CV'] | Recently segment anything model (SAM) has shown powerful segmentation
capability and has drawn great attention in computer vision fields. Massive
following works have developed various applications based on the pre-trained
SAM and achieved impressive performance on downstream vision tasks. However,
SAM consists of heav... | 2023-12-21T12:26:11Z | AAAI 2025 | null | null | TinySAM: Pushing the Envelope for Efficient Segment Anything Model | ['Han Shu', 'Wenshuo Li', 'Yehui Tang', 'Yiman Zhang', 'Yihao Chen', 'Houqiang Li', 'Yunhe Wang', 'Xinghao Chen'] | 2,023 | AAAI Conference on Artificial Intelligence | 21 | 48 | ['Computer Science'] |
2312.13913 | Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models | ['Xianfang Zeng', 'Xin Chen', 'Zhongqi Qi', 'Wen Liu', 'Zibo Zhao', 'Zhibin Wang', 'Bin Fu', 'Yong Liu', 'Gang Yu'] | ['cs.CV'] | This paper presents Paint3D, a novel coarse-to-fine generative framework that
is capable of producing high-resolution, lighting-less, and diverse 2K UV
texture maps for untextured 3D meshes conditioned on text or image inputs. The
key challenge addressed is generating high-quality textures without embedded
illumination... | 2023-12-21T15:01:47Z | Project Website: https://github.com/OpenTexture/Paint3D | null | null | null | null | null | null | null | null | null |
2312.13951 | Typhoon: Thai Large Language Models | ['Kunat Pipatanakul', 'Phatrasek Jirabovonvisut', 'Potsawee Manakul', 'Sittipong Sripaisarnmongkol', 'Ruangsak Patomwong', 'Pathomporn Chokchainant', 'Kasima Tharnpipitchai'] | ['cs.CL', 'cs.AI'] | Typhoon is a series of Thai large language models (LLMs) developed
specifically for the Thai language. This technical report presents challenges
and insights in developing Thai LLMs, including data preparation, pretraining,
instruction-tuning, and evaluation. As one of the challenges of low-resource
languages is the am... | 2023-12-21T15:38:41Z | technical report, 12 pages | null | null | null | null | null | null | null | null | null |
2312.14055 | Multi-Sentence Grounding for Long-term Instructional Video | ['Zeqian Li', 'Qirui Chen', 'Tengda Han', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['cs.CV'] | In this paper, we aim to establish an automatic, scalable pipeline for
denoising the large-scale instructional dataset and construct a high-quality
video-text dataset with multiple descriptive steps supervision, named
HowToStep. We make the following contributions: (i) improving the quality of
sentences in dataset by u... | 2023-12-21T17:28:09Z | null | null | null | null | null | null | null | null | null | null |
2312.14057 | Weighted least-squares approximation with determinantal point processes
and generalized volume sampling | ['Anthony Nouy', 'Bertrand Michel'] | ['math.NA', 'cs.LG', 'cs.NA', 'math.ST', 'stat.TH'] | We consider the problem of approximating a function from $L^2$ by an element
of a given $m$-dimensional space $V_m$, associated with some feature map
$\varphi$, using evaluations of the function at random points $x_1,\dots,x_n$.
After recalling some results on optimal weighted least-squares using
independent and identi... | 2023-12-21T17:34:18Z | In this second version, conjecture (13) on DPP and (16) on volume
sampling have been modified, including a convexity requirement. Proofs of
propositions 5.4 and 5.12 have been modified accordingly. Remarks 5.5 and 5.6
have been added to discuss alternatives to conjecture (13) on DPP | null | null | null | null | null | null | null | null | null |
2312.14115 | LingoQA: Visual Question Answering for Autonomous Driving | ['Ana-Maria Marcu', 'Long Chen', 'Jan Hünermann', 'Alice Karnsund', 'Benoit Hanotte', 'Prajwal Chidananda', 'Saurabh Nair', 'Vijay Badrinarayanan', 'Alex Kendall', 'Jamie Shotton', 'Elahe Arani', 'Oleg Sinavski'] | ['cs.RO', 'cs.AI', 'cs.CV'] | We introduce LingoQA, a novel dataset and benchmark for visual question
answering in autonomous driving. The dataset contains 28K unique short video
scenarios, and 419K annotations. Evaluating state-of-the-art vision-language
models on our benchmark shows that their performance is below human
capabilities, with GPT-4V ... | 2023-12-21T18:40:34Z | Accepted to ECCV 2024. Benchmark and dataset are available at
https://github.com/wayveai/LingoQA/ | null | null | LingoQA: Visual Question Answering for Autonomous Driving | ['Ana-Maria Marcu', 'Long Chen', 'Jan Hünermann', 'Alice Karnsund', 'Benoît Hanotte', 'Prajwal Chidananda', 'Saurabh Nair', 'Vijay Badrinarayanan', 'Alex Kendall', 'Jamie Shotton', 'Elahe Arani', 'Oleg Sinavski'] | 2,023 | European Conference on Computer Vision | 45 | 54 | ['Computer Science'] |
2312.14125 | VideoPoet: A Large Language Model for Zero-Shot Video Generation | ['Dan Kondratyuk', 'Lijun Yu', 'Xiuye Gu', 'José Lezama', 'Jonathan Huang', 'Grant Schindler', 'Rachel Hornung', 'Vighnesh Birodkar', 'Jimmy Yan', 'Ming-Chang Chiu', 'Krishna Somandepalli', 'Hassan Akbari', 'Yair Alon', 'Yong Cheng', 'Josh Dillon', 'Agrim Gupta', 'Meera Hahn', 'Anja Hauth', 'David Hendon', 'Alonso Mart... | ['cs.CV', 'cs.AI'] | We present VideoPoet, a language model capable of synthesizing high-quality
video, with matching audio, from a large variety of conditioning signals.
VideoPoet employs a decoder-only transformer architecture that processes
multimodal inputs -- including images, videos, text, and audio. The training
protocol follows tha... | 2023-12-21T18:46:41Z | To appear at ICML 2024; Project page:
http://sites.research.google/videopoet/ | null | null | null | null | null | null | null | null | null |
2312.14132 | DUSt3R: Geometric 3D Vision Made Easy | ['Shuzhe Wang', 'Vincent Leroy', 'Yohann Cabon', 'Boris Chidlovskii', 'Jerome Revaud'] | ['cs.CV'] | Multi-view stereo reconstruction (MVS) in the wild requires to first estimate
the camera parameters e.g. intrinsic and extrinsic parameters. These are
usually tedious and cumbersome to obtain, yet they are mandatory to triangulate
corresponding pixels in 3D space, which is the core of all best performing MVS
algorithms... | 2023-12-21T18:52:14Z | fixing the ref for StaticThings3D dataset | null | null | DUSt3R: Geometric 3D Vision Made Easy | ['Shuzhe Wang', 'Vincent Leroy', 'Yohann Cabon', 'Boris Chidlovskii', 'Jérôme Revaud'] | 2,023 | Computer Vision and Pattern Recognition | 406 | 196 | ['Computer Science'] |
2312.14187 | WaveCoder: Widespread And Versatile Enhancement For Code Large Language
Models By Instruction Tuning | ['Zhaojian Yu', 'Xin Zhang', 'Ning Shang', 'Yangyu Huang', 'Can Xu', 'Yishujie Zhao', 'Wenxiang Hu', 'Qiufeng Yin'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Recent work demonstrates that, after instruction tuning, Code Large Language
Models (Code LLMs) can obtain impressive capabilities to address a wide range
of code-related tasks. However, current instruction tuning methods for Code
LLMs mainly focus on the traditional code generation task, resulting in poor
performance ... | 2023-12-20T09:02:29Z | null | null | null | null | null | null | null | null | null | null |
2312.14238 | InternVL: Scaling up Vision Foundation Models and Aligning for Generic
Visual-Linguistic Tasks | ['Zhe Chen', 'Jiannan Wu', 'Wenhai Wang', 'Weijie Su', 'Guo Chen', 'Sen Xing', 'Muyan Zhong', 'Qinglong Zhang', 'Xizhou Zhu', 'Lewei Lu', 'Bin Li', 'Ping Luo', 'Tong Lu', 'Yu Qiao', 'Jifeng Dai'] | ['cs.CV'] | The exponential growth of large language models (LLMs) has opened up numerous
possibilities for multimodal AGI systems. However, the progress in vision and
vision-language foundation models, which are also critical elements of
multi-modal AGI, has not kept pace with LLMs. In this work, we design a
large-scale vision-la... | 2023-12-21T18:59:31Z | 25 pages, 5 figures, 28 tables | null | null | Intern VL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | ['Zhe Chen', 'Jiannan Wu', 'Wenhai Wang', 'Weijie Su', 'Guo Chen', 'Sen Xing', 'Zhong Muyan', 'Qinglong Zhang', 'Xizhou Zhu', 'Lewei Lu', 'Bin Li', 'Ping Luo', 'Tong Lu', 'Yu Qiao', 'Jifeng Dai'] | 2,023 | Computer Vision and Pattern Recognition | 1,217 | 190 | ['Computer Science'] |
2312.14480 | MetaAID 2.5: A Secure Framework for Developing Metaverse Applications
via Large Language Models | ['Hongyin Zhu'] | ['cs.CR', 'cs.CL', 'cs.CY'] | Large language models (LLMs) are increasingly being used in Metaverse
environments to generate dynamic and realistic content and to control the
behavior of non-player characters (NPCs). However, the cybersecurity concerns
associated with LLMs have become increasingly prominent. Previous research has
primarily focused o... | 2023-12-22T07:15:55Z | null | null | null | null | null | null | null | null | null | null |
2312.14557 | Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse
Mixture-of-Experts through Instruction-Tuning | ['Rongsheng Wang', 'Haoming Chen', 'Ruizhe Zhou', 'Yaofei Duan', 'Kunyan Cai', 'Han Ma', 'Jiaxi Cui', 'Jian Li', 'Patrick Cheong-Iao Pang', 'Yapeng Wang', 'Tao Tan'] | ['cs.CL'] | Existing research has demonstrated that refining large language models (LLMs)
through the utilization of machine-generated instruction-following data
empowers these models to exhibit impressive zero-shot capabilities for novel
tasks, without requiring human-authored instructions. In this paper, we
systematically invest... | 2023-12-22T09:30:41Z | 10 pages, 2 figures | null | null | null | null | null | null | null | null | null |
2312.14591 | Reasons to Reject? Aligning Language Models with Judgments | ['Weiwen Xu', 'Deng Cai', 'Zhisong Zhang', 'Wai Lam', 'Shuming Shi'] | ['cs.CL'] | As humans, we consistently interact with our peers and receive feedback in
the form of natural language. This language feedback allows us to maintain
appropriate behavior, and rectify potential errors. The question arises
naturally: can we use language feedback to align large language models (LLMs)?
In contrast to prev... | 2023-12-22T10:29:43Z | Accepted at ACL 2024 Findings. Our source codes and models are
publicly available at https://github.com/wwxu21/CUT | null | null | Reasons to Reject? Aligning Language Models with Judgments | ['Weiwen Xu', 'Deng Cai', 'Zhisong Zhang', 'Wai Lam', 'Shuming Shi'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 15 | 69 | ['Computer Science'] |
2312.14708 | Balancing the Style-Content Trade-Off in Sentiment Transfer Using
Polarity-Aware Denoising | ['Sourabrata Mukherjee', 'Zdeněk Kasner', 'Ondřej Dušek'] | ['cs.CL'] | Text sentiment transfer aims to flip the sentiment polarity of a sentence
(positive to negative or vice versa) while preserving its sentiment-independent
content. Although current models show good results at changing the sentiment,
content preservation in transferred sentences is insufficient. In this paper,
we present... | 2023-12-22T14:06:54Z | Published in 25th International Conference on Text, Speech and
Dialogue (TSD 2022) | null | null | Balancing the Style-Content Trade-Off in Sentiment Transfer Using Polarity-Aware Denoising | ['Sourabrata Mukherjee', 'Zdeněk Kasner', 'Ondrej Dusek'] | 2,023 | International Conference on Text, Speech and Dialogue | 12 | 37 | ['Computer Science'] |
2312.14852 | TACO: Topics in Algorithmic COde generation dataset | ['Rongao Li', 'Jie Fu', 'Bo-Wen Zhang', 'Tao Huang', 'Zhihong Sun', 'Chen Lyu', 'Guang Liu', 'Zhi Jin', 'Ge Li'] | ['cs.AI'] | We introduce TACO, an open-source, large-scale code generation dataset, with
a focus on the optics of algorithms, designed to provide a more challenging
training dataset and evaluation benchmark in the field of code generation
models. TACO includes competition-level programming questions that are more
challenging, to e... | 2023-12-22T17:25:42Z | null | null | null | null | null | null | null | null | null | null |
2312.14862 | YAYI 2: Multilingual Open-Source Large Language Models | ['Yin Luo', 'Qingchao Kong', 'Nan Xu', 'Jia Cao', 'Bao Hao', 'Baoyu Qu', 'Bo Chen', 'Chao Zhu', 'Chenyang Zhao', 'Donglei Zhang', 'Fan Feng', 'Feifei Zhao', 'Hailong Sun', 'Hanxuan Yang', 'Haojun Pan', 'Hongyu Liu', 'Jianbin Guo', 'Jiangtao Du', 'Jingyi Wang', 'Junfeng Li', 'Lei Sun', 'Liduo Liu', 'Lifeng Dong', 'Lili ... | ['cs.CL', 'cs.AI'] | As the latest advancements in natural language processing, large language
models (LLMs) have achieved human-level language understanding and generation
abilities in many real-world tasks, and even have been regarded as a potential
path to the artificial general intelligence. To better facilitate research on
LLMs, many ... | 2023-12-22T17:34:47Z | null | null | null | null | null | null | null | null | null | null |
2312.15166 | SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective
Depth Up-Scaling | ['Dahyun Kim', 'Chanjun Park', 'Sanghoon Kim', 'Wonsung Lee', 'Wonho Song', 'Yunsu Kim', 'Hyeonwoo Kim', 'Yungi Kim', 'Hyeonju Lee', 'Jihoo Kim', 'Changbae Ahn', 'Seonghoon Yang', 'Sukyung Lee', 'Hyunbyung Park', 'Gyoungjin Gim', 'Mikyoung Cha', 'Hwalsuk Lee', 'Sunghun Kim'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion
parameters, demonstrating superior performance in various natural language
processing (NLP) tasks. Inspired by recent efforts to efficiently up-scale
LLMs, we present a method for scaling LLMs called depth up-scaling (DUS), which
encompasses depth... | 2023-12-23T05:11:37Z | accepted to NAACL 2024 Industry Track | null | null | SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling | ['Dahyun Kim', 'Chanjun Park', 'Sanghoon Kim', 'Wonsung Lee', 'Wonho Song', 'Yunsu Kim', 'Hyeonwoo Kim', 'Yungi Kim', 'Hyeonju Lee', 'Jihoo Kim', 'Changbae Ahn', 'Seonghoon Yang', 'Sukyung Lee', 'Hyunbyung Park', 'Gyoungjin Gim', 'Mikyoung Cha', 'Hwalsuk Lee', 'Sunghun Kim'] | 2,023 | North American Chapter of the Association for Computational Linguistics | 150 | 52 | ['Computer Science'] |
2312.15185 | emotion2vec: Self-Supervised Pre-Training for Speech Emotion
Representation | ['Ziyang Ma', 'Zhisheng Zheng', 'Jiaxin Ye', 'Jinchao Li', 'Zhifu Gao', 'Shiliang Zhang', 'Xie Chen'] | ['cs.CL', 'cs.HC', 'cs.MM', 'cs.SD', 'eess.AS'] | We propose emotion2vec, a universal speech emotion representation model.
emotion2vec is pre-trained on open-source unlabeled emotion data through
self-supervised online distillation, combining utterance-level loss and
frame-level loss during pre-training. emotion2vec outperforms state-of-the-art
pre-trained universal m... | 2023-12-23T07:46:55Z | Code, checkpoints, and extracted features are available at
https://github.com/ddlBoJack/emotion2vec | null | null | null | null | null | null | null | null | null |
2312.15503 | Making Large Language Models A Better Foundation For Dense Retrieval | ['Chaofan Li', 'Zheng Liu', 'Shitao Xiao', 'Yingxia Shao'] | ['cs.CL'] | Dense retrieval needs to learn discriminative text embeddings to represent
the semantic relationship between query and document. It may benefit from the
using of large language models (LLMs), given LLMs' strong capability on
semantic understanding. However, the LLMs are pre-trained by text generation
tasks, whose worki... | 2023-12-24T15:10:35Z | null | null | null | null | null | null | null | null | null | null |
2312.15548 | YAYI-UIE: A Chat-Enhanced Instruction Tuning Framework for Universal
Information Extraction | ['Xinglin Xiao', 'Yijie Wang', 'Nan Xu', 'Yuqi Wang', 'Hanxuan Yang', 'Minzheng Wang', 'Yin Luo', 'Lei Wang', 'Wenji Mao', 'Daniel Zeng'] | ['cs.CL', 'cs.AI'] | The difficulty of the information extraction task lies in dealing with the
task-specific label schemas and heterogeneous data structures. Recent work has
proposed methods based on large language models to uniformly model different
information extraction tasks. However, these existing methods are deficient in
their info... | 2023-12-24T21:33:03Z | null | null | null | YAYI-UIE: A Chat-Enhanced Instruction Tuning Framework for Universal Information Extraction | ['Xinglin Xiao', 'Yijie Wang', 'Nan Xu', 'Yuqi Wang', 'Hanxuan Yang', 'Minzheng Wang', 'Yin Luo', 'Lei Wang', 'Wenji Mao', 'Daniel Zeng'] | 2023 | arXiv.org | 21 | 35 | ['Computer Science']
2312.15685 | What Makes Good Data for Alignment? A Comprehensive Study of Automatic
Data Selection in Instruction Tuning | ['Wei Liu', 'Weihao Zeng', 'Keqing He', 'Yong Jiang', 'Junxian He'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Instruction tuning is a standard technique employed to align large language
models to end tasks and user preferences after the initial pretraining phase.
Recent research indicates the critical role of data engineering in instruction
tuning -- when appropriately selected, only limited data is necessary to
achieve superi... | 2023-12-25T10:29:28Z | ICLR2024 Camera Ready. Data and model checkpoints are available at
https://github.com/hkust-nlp/deita | null | null | null | null | null | null | null | null | null |
2312.15686 | PULASki: Learning inter-rater variability using statistical distances to
improve probabilistic segmentation | ['Soumick Chatterjee', 'Franziska Gaidzik', 'Alessandro Sciarra', 'Hendrik Mattern', 'Gábor Janiga', 'Oliver Speck', 'Andreas Nürnberger', 'Sahani Pathiraja'] | ['cs.CV', 'cs.AI', 'cs.HC', 'cs.LG'] | In the domain of medical imaging, many supervised learning based methods for
segmentation face several challenges such as high variability in annotations
from multiple experts, paucity of labelled data and class imbalanced datasets.
These issues may result in segmentations that lack the requisite precision for
clinical... | 2023-12-25T10:31:22Z | null | Medical Image Analysis (2025): 103623 | 10.1016/j.media.2025.103623 | PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation | ['S. Chatterjee', 'Franziska Gaidzik', 'Alessandro Sciarra', 'H. Mattern', 'G. Janiga', 'Oliver Speck', 'Andreas Nürnberger', 'S. Pathiraja'] | 2023 | Medical Image Anal. | 0 | 58 | ['Computer Science', 'Medicine']
2312.15692 | Instruction Fusion: Advancing Prompt Evolution through Hybridization | ['Weidong Guo', 'Jiuding Yang', 'Kaitong Yang', 'Xiangyang Li', 'Zhuwei Rao', 'Yu Xu', 'Di Niu'] | ['cs.AI'] | The fine-tuning of Large Language Models (LLMs) specialized in code
generation has seen notable advancements through the use of open-domain coding
queries. Despite the successes, existing methodologies like Evol-Instruct
encounter performance limitations, impeding further enhancements in code
generation tasks. This pap... | 2023-12-25T11:00:37Z | null | null | null | null | null | null | null | null | null | null |
2312.15710 | Alleviating Hallucinations of Large Language Models through Induced
Hallucinations | ['Yue Zhang', 'Leyang Cui', 'Wei Bi', 'Shuming Shi'] | ['cs.CL', 'cs.AI'] | Despite their impressive capabilities, large language models (LLMs) have been
observed to generate responses that include inaccurate or fabricated
information, a phenomenon commonly known as ``hallucination''. In this work, we
propose a simple \textit{Induce-then-Contrast} Decoding (ICD) strategy to
alleviate hallucina... | 2023-12-25T12:32:49Z | Work in progress | null | null | Alleviating Hallucinations of Large Language Models through Induced Hallucinations | ['Yue Zhang', 'Leyang Cui', 'Wei Bi', 'Shuming Shi'] | 2023 | North American Chapter of the Association for Computational Linguistics | 57 | 65 | ['Computer Science']
2312.15713 | PersianLLaMA: Towards Building First Persian Large Language Model | ['Mohammad Amin Abbasi', 'Arash Ghafouri', 'Mahdi Firouzmandi', 'Hassan Naderi', 'Behrouz Minaei Bidgoli'] | ['cs.CL', 'cs.AI'] | Despite the widespread use of the Persian language by millions globally,
limited efforts have been made in natural language processing for this
language. The use of large language models as effective tools in various
natural language processing tasks typically requires extensive textual data and
robust hardware resourc... | 2023-12-25T12:48:55Z | null | null | null | null | null | null | null | null | null | null |
2312.15861 | Towards Squeezing-Averse Virtual Try-On via Sequential Deformation | ['Sang-Heon Shim', 'Jiwoo Chung', 'Jae-Pil Heo'] | ['cs.CV'] | In this paper, we first investigate a visual quality degradation problem
observed in recent high-resolution virtual try-on approach. The tendency is
empirically found that the textures of clothes are squeezed at the sleeve, as
visualized in the upper row of Fig.1(a). A main reason for the issue arises
from a gradient c... | 2023-12-26T03:02:01Z | Accepted to AAAI 2024 | null | null | null | null | null | null | null | null | null |
2312.15960 | MoTCoder: Elevating Large Language Models with Modular of Thought for
Challenging Programming Tasks | ['Jingyao Li', 'Pengguang Chen', 'Bin Xia', 'Hong Xu', 'Jiaya Jia'] | ['cs.LG', 'cs.PL', 'cs.SE'] | Large Language Models (LLMs) have showcased impressive capabilities in
handling straightforward programming tasks. However, their performance tends to
falter when confronted with more challenging programming problems. We observe
that conventional models often generate solutions as monolithic code blocks,
restricting th... | 2023-12-26T08:49:57Z | Data:
https://huggingface.co/datasets/JingyaoLi/MoTCode-Data,MoTCoder-32B:
https://huggingface.co/JingyaoLi/MoTCoder-32B-V1.5,MoTCoder-7B:
https://huggingface.co/JingyaoLi/MoTCoder-7B-v1.5,Code:
https://github.com/dvlab-research/MoTCoder, Paper: arXiv:2312.15960 | null | null | null | null | null | null | null | null | null |
2312.15997 | Aligning Large Language Models with Human Preferences through
Representation Engineering | ['Wenhao Liu', 'Xiaohua Wang', 'Muling Wu', 'Tianlong Li', 'Changze Lv', 'Zixuan Ling', 'Jianhao Zhu', 'Cenyuan Zhang', 'Xiaoqing Zheng', 'Xuanjing Huang'] | ['cs.CL'] | Aligning large language models (LLMs) with human preferences is crucial for
enhancing their utility in terms of helpfulness, truthfulness, safety,
harmlessness, and interestingness. Existing methods for achieving this
alignment often involves employing reinforcement learning from human feedback
(RLHF) to fine-tune LLMs... | 2023-12-26T11:01:36Z | null | null | null | Aligning Large Language Models with Human Preferences through Representation Engineering | ['Wenhao Liu', 'Xiaohua Wang', 'Muling Wu', 'Tianlong Li', 'Changze Lv', 'Zixuan Ling', 'Jianhao Zhu', 'Cenyuan Zhang', 'Xiaoqing Zheng', 'Xuanjing Huang'] | 2023 | Annual Meeting of the Association for Computational Linguistics | 41 | 49 | ['Computer Science']
2312.16044 | LLMLight: Large Language Models as Traffic Signal Control Agents | ['Siqi Lai', 'Zhao Xu', 'Weijia Zhang', 'Hao Liu', 'Hui Xiong'] | ['cs.AI'] | Traffic Signal Control (TSC) is a crucial component in urban traffic
management, aiming to optimize road network efficiency and reduce congestion.
Traditional TSC methods, primarily based on transportation engineering and
reinforcement learning (RL), often struggle with generalization abilities
across varied traffic sc... | 2023-12-26T13:17:06Z | null | null | null | LLMLight: Large Language Models as Traffic Signal Control Agents | ['Siqi Lai', 'Zhao Xu', 'Weijiao Zhang', 'Hao Liu', 'Hui Xiong'] | 2023 | Knowledge Discovery and Data Mining | 14 | 52 | ['Computer Science']
2312.16108 | LaneSegNet: Map Learning with Lane Segment Perception for Autonomous
Driving | ['Tianyu Li', 'Peijin Jia', 'Bangjun Wang', 'Li Chen', 'Kun Jiang', 'Junchi Yan', 'Hongyang Li'] | ['cs.CV'] | A map, as crucial information for downstream applications of an autonomous
driving system, is usually represented in lanelines or centerlines. However,
existing literature on map learning primarily focuses on either detecting
geometry-based lanelines or perceiving topology relationships of centerlines.
Both of these me... | 2023-12-26T16:22:10Z | Accepted in ICLR 2024 | null | null | LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving | ['Tianyu Li', 'Peijin Jia', 'Bangjun Wang', 'Li Chen', 'Kun Jiang', 'Junchi Yan', 'Hongyang Li'] | 2023 | International Conference on Learning Representations | 38 | 35 | ['Computer Science']
2312.16144 | Towards Better Monolingual Japanese Retrievers with Multi-Vector Models | ['Benjamin Clavié'] | ['cs.CL', 'cs.AI'] | As language-specific training data tends to be sparsely available compared to
English, document retrieval in many languages has been largely relying on
multilingual models. In Japanese, the best performing deep-learning based
retrieval approaches rely on multilingual dense embedders, with Japanese-only
models lagging f... | 2023-12-26T18:07:05Z | null | null | null | Towards Better Monolingual Japanese Retrievers with Multi-Vector Models | ['Benjamin Clavié'] | 2023 | null | 1 | 31 | ['Computer Science']
2312.16145 | One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and
Erasing Applications | ['Mengyao Lyu', 'Yuhong Yang', 'Haiwen Hong', 'Hui Chen', 'Xuan Jin', 'Yuan He', 'Hui Xue', 'Jungong Han', 'Guiguang Ding'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The prevalent use of commercial and open-source diffusion models (DMs) for
text-to-image generation prompts risk mitigation to prevent undesired
behaviors. Existing concept erasing methods in academia are all based on full
parameter or specification-based fine-tuning, from which we observe the
following issues: 1) Gene... | 2023-12-26T18:08:48Z | CVPR 2024 | null | null | One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications | ['Mengyao Lyu', 'Yuhong Yang', 'Haiwen Hong', 'Hui Chen', 'Xuan Jin', 'Yuan He', 'Hui Xue', 'Jungong Han', 'Guiguang Ding'] | 2023 | Computer Vision and Pattern Recognition | 67 | 50 | ['Computer Science']
2312.16693 | I2V-Adapter: A General Image-to-Video Adapter for Diffusion Models | ['Xun Guo', 'Mingwu Zheng', 'Liang Hou', 'Yuan Gao', 'Yufan Deng', 'Pengfei Wan', 'Di Zhang', 'Yufan Liu', 'Weiming Hu', 'Zhengjun Zha', 'Haibin Huang', 'Chongyang Ma'] | ['cs.CV'] | Text-guided image-to-video (I2V) generation aims to generate a coherent video
that preserves the identity of the input image and semantically aligns with the
input prompt. Existing methods typically augment pretrained text-to-video (T2V)
models by either concatenating the image with noised video frames channel-wise
bef... | 2023-12-27T19:11:50Z | null | null | null | null | null | null | null | null | null | null |
2312.16862 | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | ['Zhengqing Yuan', 'Zhaoxu Li', 'Weiran Huang', 'Yanfang Ye', 'Lichao Sun'] | ['cs.CV', 'cs.CL'] | In recent years, multimodal large language models (MLLMs) such as GPT-4V have
demonstrated remarkable advancements, excelling in a variety of vision-language
tasks. Despite their prowess, the closed-source nature and computational
demands of such models limit their accessibility and applicability. This study
introduces... | 2023-12-28T07:11:41Z | Accepted by ICML workshop 2024 | null | null | TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | ['Zhengqing Yuan', 'Zhaoxu Li', 'Lichao Sun'] | 2023 | arXiv.org | 55 | 0 | ['Computer Science']
2312.16886 | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile
Devices | ['Xiangxiang Chu', 'Limeng Qiao', 'Xinyang Lin', 'Shuang Xu', 'Yang Yang', 'Yiming Hu', 'Fei Wei', 'Xinyu Zhang', 'Bo Zhang', 'Xiaolin Wei', 'Chunhua Shen'] | ['cs.CV'] | We present MobileVLM, a competent multimodal vision language model (MMVLM)
targeted to run on mobile devices. It is an amalgamation of a myriad of
architectural designs and techniques that are mobile-oriented, which comprises
a set of language models at the scale of 1.4B and 2.7B parameters, trained from
scratch, a mul... | 2023-12-28T08:21:24Z | Tech Report | null | null | null | null | null | null | null | null | null |
2312.17090 | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined
Levels | ['Haoning Wu', 'Zicheng Zhang', 'Weixia Zhang', 'Chaofeng Chen', 'Liang Liao', 'Chunyi Li', 'Yixuan Gao', 'Annan Wang', 'Erli Zhang', 'Wenxiu Sun', 'Qiong Yan', 'Xiongkuo Min', 'Guangtao Zhai', 'Weisi Lin'] | ['cs.CV', 'cs.CL', 'cs.LG'] | The explosion of visual content available online underscores the requirement
for an accurate machine assessor to robustly evaluate scores across diverse
types of visual contents. While recent studies have demonstrated the
exceptional potentials of large multi-modality models (LMMs) on a wide range of
related fields, in... | 2023-12-28T16:10:25Z | Technical Report | null | null | Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels | ['Haoning Wu', 'Zicheng Zhang', 'Weixia Zhang', 'Chaofeng Chen', 'Liang Liao', 'Chunyi Li', 'Yixuan Gao', 'Annan Wang', 'Erli Zhang', 'Wenxiu Sun', 'Qiong Yan', 'Xiongkuo Min', 'Guangtao Zhai', 'Weisi Lin'] | 2023 | International Conference on Machine Learning | 163 | 44 | ['Computer Science']
2312.17183 | One Model to Rule them All: Towards Universal Segmentation for Medical
Images with Text Prompts | ['Ziheng Zhao', 'Yao Zhang', 'Chaoyi Wu', 'Xiaoman Zhang', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['eess.IV', 'cs.CV'] | In this study, we aim to build up a model that can Segment Anything in
radiology scans, driven by medical terminologies as Text prompts, termed as
SAT. Our main contributions are three folds: (i) for dataset construction, we
construct the first multi-modal knowledge tree on human anatomy, including 6502
anatomical term... | 2023-12-28T18:16:00Z | 69 pages | null | null | One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts | ['Ziheng Zhao', 'Yao Zhang', 'Chaoyi Wu', 'Xiaoman Zhang', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | 2023 | arXiv.org | 42 | 77 | ['Engineering', 'Computer Science']
2312.17240 | LISA++: An Improved Baseline for Reasoning Segmentation with Large
Language Model | ['Senqiao Yang', 'Tianyuan Qu', 'Xin Lai', 'Zhuotao Tian', 'Bohao Peng', 'Shu Liu', 'Jiaya Jia'] | ['cs.CV'] | While LISA effectively bridges the gap between segmentation and large
language models to enable reasoning segmentation, it poses certain limitations:
unable to distinguish different instances of the target region, and constrained
by the pre-defined textual response formats. In this work, we introduce LISA++,
an update ... | 2023-12-28T18:58:33Z | Typo fixed | null | null | null | null | null | null | null | null | null |
2312.17279 | Stateful Conformer with Cache-based Inference for Streaming Automatic
Speech Recognition | ['Vahid Noroozi', 'Somshubra Majumdar', 'Ankur Kumar', 'Jagadeesh Balam', 'Boris Ginsburg'] | ['cs.CL', 'eess.AS'] | In this paper, we propose an efficient and accurate streaming speech
recognition model based on the FastConformer architecture. We adapted the
FastConformer architecture for streaming applications through: (1) constraining
both the look-ahead and past contexts in the encoder, and (2) introducing an
activation caching m... | 2023-12-27T21:04:26Z | Shorter version accepted to ICASSP 2024 | null | null | null | null | null | null | null | null | null |
2312.17432 | Video Understanding with Large Language Models: A Survey | ['Yunlong Tang', 'Jing Bi', 'Siting Xu', 'Luchuan Song', 'Susan Liang', 'Teng Wang', 'Daoan Zhang', 'Jie An', 'Jingyang Lin', 'Rongyi Zhu', 'Ali Vosoughi', 'Chao Huang', 'Zeliang Zhang', 'Pinxin Liu', 'Mingqian Feng', 'Feng Zheng', 'Jianguo Zhang', 'Ping Luo', 'Jiebo Luo', 'Chenliang Xu'] | ['cs.CV', 'cs.CL'] | With the burgeoning growth of online video platforms and the escalating
volume of video content, the demand for proficient video understanding tools
has intensified markedly. Given the remarkable capabilities of large language
models (LLMs) in language and multimodal tasks, this survey provides a detailed
overview of r... | 2023-12-29T01:56:17Z | Accepted by IEEE TCSVT | null | null | Video Understanding with Large Language Models: A Survey | ['Yunlong Tang', 'Jing Bi', 'Siting Xu', 'Luchuan Song', 'Susan Liang', 'Teng Wang', 'Daoan Zhang', 'Jie An', 'Jingyang Lin', 'Rongyi Zhu', 'A. Vosoughi', 'Chao Huang', 'Zeliang Zhang', 'Feng Zheng', 'Jianguo Zhang', 'Ping Luo', 'Jiebo Luo', 'Chenliang Xu'] | 2023 | IEEE transactions on circuits and systems for video technology (Print) | 100 | 429 | ['Computer Science']
2312.17482 | MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining | ['Jacob Portes', 'Alex Trott', 'Sam Havens', 'Daniel King', 'Abhinav Venigalla', 'Moin Nadeem', 'Nikhil Sardana', 'Daya Khudia', 'Jonathan Frankle'] | ['cs.CL', 'cs.LG'] | Although BERT-style encoder models are heavily used in NLP research, many
researchers do not pretrain their own BERTs from scratch due to the high cost
of training. In the past half-decade since BERT first rose to prominence, many
advances have been made with other transformer architectures and training
configurations ... | 2023-12-29T06:05:19Z | 10 pages, 4 figures in main text. 25 pages total | NeurIPS 2023 | null | null | null | null | null | null | null | null |
2312.17543 | Building Efficient Universal Classifiers with Natural Language Inference | ['Moritz Laurer', 'Wouter van Atteveldt', 'Andreu Casas', 'Kasper Welbers'] | ['cs.CL', 'cs.AI'] | Generative Large Language Models (LLMs) have become the mainstream choice for
fewshot and zeroshot learning thanks to the universality of text generation.
Many users, however, do not need the broad capabilities of generative LLMs when
they only want to automate a classification task. Smaller BERT-like models can
also l... | 2023-12-29T10:18:36Z | null | null | null | null | null | null | null | null | null | null |
2401.00096 | A foundation model for atomistic materials chemistry | ['Ilyes Batatia', 'Philipp Benner', 'Yuan Chiang', 'Alin M. Elena', 'Dávid P. Kovács', 'Janosh Riebesell', 'Xavier R. Advincula', 'Mark Asta', 'Matthew Avaylon', 'William J. Baldwin', 'Fabian Berger', 'Noam Bernstein', 'Arghya Bhowmik', 'Samuel M. Blau', 'Vlad Cărare', 'James P. Darby', 'Sandip De', 'Flaviano Della Pia... | ['physics.chem-ph', 'cond-mat.mtrl-sci'] | Machine-learned force fields have transformed the atomistic modelling of
materials by enabling simulations of ab initio quality on unprecedented time
and length scales. However, they are currently limited by: (i) the significant
computational and human effort that must go into development and validation of
potentials f... | 2023-12-29T23:08:59Z | 119 pages, 63 figures, 37MB PDF | null | null | null | null | null | null | null | null | null |
2401.00110 | Diffusion Model with Perceptual Loss | ['Shanchuan Lin', 'Xiao Yang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Diffusion models without guidance generate very unrealistic samples. Guidance
is used ubiquitously, and previous research has attributed its effect to
low-temperature sampling that improves quality by trading off diversity.
However, this perspective is incomplete. Our research shows that the choice of
the loss objectiv... | 2023-12-30T01:24:25Z | null | null | null | Diffusion Model with Perceptual Loss | ['Shanchuan Lin', 'Xiao Yang'] | 2023 | arXiv.org | 17 | 63 | ['Computer Science']
2401.00170 | L3Cube-MahaSocialNER: A Social Media based Marathi NER Dataset and BERT
models | ['Harsh Chaudhari', 'Anuja Patil', 'Dhanashree Lavekar', 'Pranav Khairnar', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | This work introduces the L3Cube-MahaSocialNER dataset, the first and largest
social media dataset specifically designed for Named Entity Recognition (NER)
in the Marathi language. The dataset comprises 18,000 manually labeled
sentences covering eight entity classes, addressing challenges posed by social
media data, inc... | 2023-12-30T08:30:24Z | Accepted at Forum for Information Retrieval Evaluation (FIRE 2023) | null | 10.1145/3632754.3632764 | null | null | null | null | null | null | null |
2401.00248 | Promoting Segment Anything Model towards Highly Accurate Dichotomous
Image Segmentation | ['Xianjie Liu', 'Keren Fu', 'Yao Jiang', 'Qijun Zhao'] | ['cs.CV', 'cs.AI'] | The Segment Anything Model (SAM) represents a significant breakthrough into
foundation models for computer vision, providing a large-scale image
segmentation model. However, despite SAM's zero-shot performance, its
segmentation masks lack fine-grained details, particularly in accurately
delineating object boundaries. T... | 2023-12-30T14:24:33Z | null | null | null | null | null | null | null | null | null | null |
2401.00368 | Improving Text Embeddings with Large Language Models | ['Liang Wang', 'Nan Yang', 'Xiaolong Huang', 'Linjun Yang', 'Rangan Majumder', 'Furu Wei'] | ['cs.CL', 'cs.IR'] | In this paper, we introduce a novel and simple method for obtaining
high-quality text embeddings using only synthetic data and less than 1k
training steps. Unlike existing methods that often depend on multi-stage
intermediate pre-training with billions of weakly-supervised text pairs,
followed by fine-tuning with a few... | 2023-12-31T02:13:18Z | Accepted by ACL 2024 | null | null | null | null | null | null | null | null | null |
2401.00374 | EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via
Expressive Masked Audio Gesture Modeling | ['Haiyang Liu', 'Zihao Zhu', 'Giorgio Becherini', 'Yichen Peng', 'Mingyang Su', 'You Zhou', 'Xuefei Zhe', 'Naoya Iwamoto', 'Bo Zheng', 'Michael J. Black'] | ['cs.CV'] | We propose EMAGE, a framework to generate full-body human gestures from audio
and masked gestures, encompassing facial, local body, hands, and global
movements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new
mesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with
FLAME head ... | 2023-12-31T02:25:41Z | Fix typos; Conflict of Interest Disclosure; CVPR Camera Ready;
Project Page: https://pantomatrix.github.io/EMAGE/ | null | null | null | null | null | null | null | null | null |
2401.00396 | RAGTruth: A Hallucination Corpus for Developing Trustworthy
Retrieval-Augmented Language Models | ['Cheng Niu', 'Yuanhao Wu', 'Juno Zhu', 'Siliang Xu', 'Kashun Shum', 'Randy Zhong', 'Juntong Song', 'Tong Zhang'] | ['cs.CL'] | Retrieval-augmented generation (RAG) has become a main technique for
alleviating hallucinations in large language models (LLMs). Despite the
integration of RAG, LLMs may still present unsupported or contradictory claims
to the retrieved contents. In order to develop effective hallucination
prevention strategies under R... | 2023-12-31T04:43:45Z | null | null | null | RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models | ['Yuanhao Wu', 'Juno Zhu', 'Siliang Xu', 'Kashun Shum', 'Cheng Niu', 'Randy Zhong', 'Juntong Song', 'Tong Zhang'] | 2023 | Annual Meeting of the Association for Computational Linguistics | 109 | 47 | ['Computer Science']
2401.00434 | GeoGalactica: A Scientific Large Language Model in Geoscience | ['Zhouhan Lin', 'Cheng Deng', 'Le Zhou', 'Tianhang Zhang', 'Yi Xu', 'Yutong Xu', 'Zhongmou He', 'Yuanyuan Shi', 'Beiya Dai', 'Yunchong Song', 'Boyi Zeng', 'Qiyuan Chen', 'Yuxun Miao', 'Bo Xue', 'Shu Wang', 'Luoyi Fu', 'Weinan Zhang', 'Junxian He', 'Yunqiang Zhu', 'Xinbing Wang', 'Chenghu Zhou'] | ['cs.CL', 'I.2.7; F.4.1'] | Large language models (LLMs) have achieved huge success for their general
knowledge and ability to solve a wide spectrum of tasks in natural language
processing (NLP). Due to their impressive abilities, LLMs have shed light on
potential inter-discipline applications to foster scientific discoveries of a
specific domain... | 2023-12-31T09:22:54Z | null | null | null | null | null | null | null | null | null | null |
2401.00789 | Retrieval-Augmented Egocentric Video Captioning | ['Jilan Xu', 'Yifei Huang', 'Junlin Hou', 'Guo Chen', 'Yuejie Zhang', 'Rui Feng', 'Weidi Xie'] | ['cs.CV'] | Understanding human actions from videos of first-person view poses
significant challenges. Most prior approaches explore representation learning
on egocentric videos only, while overlooking the potential benefit of
exploiting existing large-scale third-person videos. In this paper, (1) we
develop EgoInstructor, a retri... | 2024-01-01T15:31:06Z | CVPR 2024. Project page is available at:
https://jazzcharles.github.io/Egoinstructor/ | null | null | null | null | null | null | null | null | null |
2401.01044 | Auffusion: Leveraging the Power of Diffusion and Large Language Models
for Text-to-Audio Generation | ['Jinlong Xue', 'Yayue Deng', 'Yingming Gao', 'Ya Li'] | ['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS'] | Recent advancements in diffusion models and large language models (LLMs) have
significantly propelled the field of AIGC. Text-to-Audio (TTA), a burgeoning
AIGC application designed to generate audio from natural language prompts, is
attracting increasing attention. However, existing TTA studies often struggle
with gene... | 2024-01-02T05:42:14Z | Demo and implementation at https://auffusion.github.io | null | null | Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation | ['Jinlong Xue', 'Yayue Deng', 'Yingming Gao', 'Ya Li'] | 2024 | IEEE/ACM Transactions on Audio Speech and Language Processing | 36 | 62 | ['Computer Science', 'Engineering']
2401.01053 | Cheetah: Natural Language Generation for 517 African Languages | ['Ife Adebara', 'AbdelRahim Elmadany', 'Muhammad Abdul-Mageed'] | ['cs.CL'] | Low-resource African languages pose unique challenges for natural language
processing (NLP) tasks, including natural language generation (NLG). In this
paper, we develop Cheetah, a massively multilingual NLG language model for
African languages. Cheetah supports 517 African languages and language
varieties, allowing us... | 2024-01-02T06:24:13Z | null | null | null | null | null | null | null | null | null | null |
2401.01089 | Quokka: An Open-source Large Language Model ChatBot for Material Science | ['Xianjun Yang', 'Stephen D. Wilson', 'Linda Petzold'] | ['cs.CL', 'cs.AI', 'cs.CE'] | This paper presents the development of a specialized chatbot for materials
science, leveraging the Llama-2 language model, and continuing pre-training on
the expansive research articles in the materials science domain from the S2ORC
dataset. The methodology involves an initial pretraining phase on over one
million doma... | 2024-01-02T08:14:48Z | Work in progress | null | null | Quokka: An Open-source Large Language Model ChatBot for Material Science | ['Xianjun Yang', 'Stephen Wilson', 'L. Petzold'] | 2024 | arXiv.org | 2 | 30 | ['Computer Science']
2401.01107 | CityPulse: Fine-Grained Assessment of Urban Change with Street View Time
Series | ['Tianyuan Huang', 'Zejia Wu', 'Jiajun Wu', 'Jackelyn Hwang', 'Ram Rajagopal'] | ['cs.CV'] | Urban transformations have profound societal impact on both individuals and
communities at large. Accurately assessing these shifts is essential for
understanding their underlying causes and ensuring sustainable urban planning.
Traditional measurements often encounter constraints in spatial and temporal
granularity, fa... | 2024-01-02T08:57:09Z | Accepted by AAAI 2024 | null | null | CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series | ['Tianyuan Huang', 'Zejia Wu', 'Jiajun Wu', 'Jackelyn Hwang', 'Ram Rajagopal'] | 2024 | AAAI Conference on Artificial Intelligence | 4 | 36 | ['Computer Science']
2401.01173 | En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data | ['Yifang Men', 'Biwen Lei', 'Yuan Yao', 'Miaomiao Cui', 'Zhouhui Lian', 'Xuansong Xie'] | ['cs.CV'] | We present En3D, an enhanced generative scheme for sculpting high-quality 3D
human avatars. Unlike previous works that rely on scarce 3D datasets or limited
2D collections with imbalanced viewing angles and imprecise pose priors, our
approach aims to develop a zero-shot 3D generative scheme capable of producing
visuall... | 2024-01-02T12:06:31Z | Project Page: https://menyifang.github.io/projects/En3D/index.html | null | null | null | null | null | null | null | null | null |
2401.01335 | Self-Play Fine-Tuning Converts Weak Language Models to Strong Language
Models | ['Zixiang Chen', 'Yihe Deng', 'Huizhuo Yuan', 'Kaixuan Ji', 'Quanquan Gu'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | Harnessing the power of human-annotated data through Supervised Fine-Tuning
(SFT) is pivotal for advancing Large Language Models (LLMs). In this paper, we
delve into the prospect of growing a strong LLM out of a weak one without the
need for acquiring additional human-annotated data. We propose a new
fine-tuning method... | 2024-01-02T18:53:13Z | 22 pages, 6 figures, 7 tables. In ICML 2024 | null | null | null | null | null | null | null | null | null |
2401.01456 | ColorizeDiffusion: Adjustable Sketch Colorization with Reference Image
and Text | ['Dingkun Yan', 'Liang Yuan', 'Erwin Wu', 'Yuma Nishioka', 'Issei Fujishiro', 'Suguru Saito'] | ['cs.CV'] | Diffusion models have recently demonstrated their effectiveness in generating
extremely high-quality images and are now utilized in a wide range of
applications, including automatic sketch colorization. Although many methods
have been developed for guided sketch colorization, there has been limited
exploration of the p... | 2024-01-02T22:46:12Z | null | null | null | null | null | null | null | null | null | null |
2401.01600 | PLLaMa: An Open-source Large Language Model for Plant Science | ['Xianjun Yang', 'Junfeng Gao', 'Wenxin Xue', 'Erik Alexandersson'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.LG'] | Large Language Models (LLMs) have exhibited remarkable capabilities in
understanding and interacting with natural language across various sectors.
However, their effectiveness is limited in specialized areas requiring high
accuracy, such as plant science, due to a lack of specific expertise in these
fields. This paper ... | 2024-01-03T08:06:26Z | Work in progress | null | null | null | null | null | null | null | null | null |
2401.01614 | GPT-4V(ision) is a Generalist Web Agent, if Grounded | ['Boyuan Zheng', 'Boyu Gou', 'Jihyung Kil', 'Huan Sun', 'Yu Su'] | ['cs.IR', 'cs.AI', 'cs.CL', 'cs.CV'] | The recent development on large multimodal models (LMMs), especially
GPT-4V(ision) and Gemini, has been quickly expanding the capability boundaries
of multimodal models beyond traditional tasks like image captioning and visual
question answering. In this work, we explore the potential of LMMs like GPT-4V
as a generalis... | 2024-01-03T08:33:09Z | null | null | null | null | null | null | null | null | null | null |
2401.01651 | AIGCBench: Comprehensive Evaluation of Image-to-Video Content Generated
by AI | ['Fanda Fan', 'Chunjie Luo', 'Wanling Gao', 'Jianfeng Zhan'] | ['cs.CV', 'cs.AI'] | The burgeoning field of Artificial Intelligence Generated Content (AIGC) is
witnessing rapid advancements, particularly in video generation. This paper
introduces AIGCBench, a pioneering comprehensive and scalable benchmark
designed to evaluate a variety of video generation tasks, with a primary focus
on Image-to-Video... | 2024-01-03T10:08:40Z | Accepted to BenchCouncil Transactions on Benchmarks, Standards and
Evaluations (TBench) | null | null | AIGCBench: Comprehensive Evaluation of Image-to-Video Content Generated by AI | ['Fanda Fan', 'Chunjie Luo', 'Wanling Gao', 'Jianfeng Zhan'] | 2024 | BenchCouncil Transactions on Benchmarks, Standards and Evaluations | 15 | 47 | ['Computer Science']
2401.01808 | aMUSEd: An Open MUSE Reproduction | ['Suraj Patil', 'William Berman', 'Robin Rombach', 'Patrick von Platen'] | ['cs.CV'] | We present aMUSEd, an open-source, lightweight masked image model (MIM) for
text-to-image generation based on MUSE. With 10 percent of MUSE's parameters,
aMUSEd is focused on fast image generation. We believe MIM is under-explored
compared to latent diffusion, the prevailing approach for text-to-image
generation. Compa... | 2024-01-03T16:10:07Z | null | null | null | aMUSEd: An Open MUSE Reproduction | ['Suraj Patil', 'William Berman', 'Robin Rombach', 'Patrick von Platen'] | 2024 | arXiv.org | 20 | 41 | ['Computer Science']
2401.01916 | AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse
Datasets | ['Ernest Perkowski', 'Rui Pan', 'Tuan Dung Nguyen', 'Yuan-Sen Ting', 'Sandor Kruk', 'Tong Zhang', "Charlie O'Neill", 'Maja Jablonska', 'Zechang Sun', 'Michael J. Smith', 'Huiling Liu', 'Kevin Schawinski', 'Kartheik Iyer', 'Ioana Ciucă for UniverseTBD'] | ['astro-ph.IM', 'astro-ph.CO', 'astro-ph.GA', 'astro-ph.SR', 'cs.CL', 'cs.LG'] | We explore the potential of enhancing LLM performance in astronomy-focused
question-answering through targeted, continual pre-training. By employing a
compact 7B-parameter LLaMA-2 model and focusing exclusively on a curated set of
astronomy corpora -- comprising abstracts, introductions, and conclusions -- we
achieve n... | 2024-01-03T04:47:02Z | 4 pages, 1 figure, model is available at
https://huggingface.co/universeTBD, published in RNAAS | null | null | null | null | null | null | null | null | null |
2401.01967 | A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO
and Toxicity | ['Andrew Lee', 'Xiaoyan Bai', 'Itamar Pres', 'Martin Wattenberg', 'Jonathan K. Kummerfeld', 'Rada Mihalcea'] | ['cs.CL', 'cs.AI'] | While alignment algorithms are now commonly used to tune pre-trained language
models towards a user's preferences, we lack explanations for the underlying
mechanisms in which models become ``aligned'', thus making it difficult to
explain phenomena like jailbreaks. In this work we study a popular algorithm,
direct prefe... | 2024-01-03T20:26:15Z | null | null | null | null | null | null | null | null | null | null |
2401.02032 | DiffusionEdge: Diffusion Probabilistic Model for Crisp Edge Detection | ['Yunfan Ye', 'Kai Xu', 'Yuhang Huang', 'Renjiao Yi', 'Zhiping Cai'] | ['cs.CV'] | Limited by the encoder-decoder architecture, learning-based edge detectors
usually have difficulty predicting edge maps that satisfy both correctness and
crispness. With the recent success of the diffusion probabilistic model (DPM),
we found it is especially suitable for accurate and crisp edge detection since
the deno... | 2024-01-04T02:20:54Z | AAAI 2024 | null | null | null | null | null | null | null | null | null |
2401.02072 | ICE-GRT: Instruction Context Enhancement by Generative Reinforcement
based Transformers | ['Chen Zheng', 'Ke Sun', 'Da Tang', 'Yukun Ma', 'Yuyu Zhang', 'Chenguang Xi', 'Xun Zhou'] | ['cs.CL'] | The emergence of Large Language Models (LLMs) such as ChatGPT and LLaMA
encounter limitations in domain-specific tasks, with these models often lacking
depth and accuracy in specialized areas, and exhibiting a decrease in general
capabilities when fine-tuned, particularly analysis ability in small sized
models. To addr... | 2024-01-04T05:47:41Z | null | null | null | null | null | null | null | null | null | null |
2401.02254 | L3Cube-IndicNews: News-based Short Text and Long Document Classification
Datasets in Indic Languages | ['Aishwarya Mirashi', 'Srushti Sonavane', 'Purva Lingayat', 'Tejas Padhiyar', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | In this work, we introduce L3Cube-IndicNews, a multilingual text
classification corpus aimed at curating a high-quality dataset for Indian
regional languages, with a specific focus on news headlines and articles. We
have centered our work on 10 prominent Indic languages, including Hindi,
Bengali, Marathi, Telugu, Tamil... | 2024-01-04T13:11:17Z | Accepted at the International Conference on Natural Language
Processing (ICON 2023) | null | null | null | null | null | null | null | null | null |
2401.02330 | LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model | ['Yichen Zhu', 'Minjie Zhu', 'Ning Liu', 'Zhicai Ou', 'Xiaofeng Mou', 'Jian Tang'] | ['cs.CV', 'cs.CL'] | In this paper, we introduce LLaVA-$\phi$ (LLaVA-Phi), an efficient
multi-modal assistant that harnesses the power of the recently advanced small
language model, Phi-2, to facilitate multi-modal dialogues. LLaVA-Phi marks a
notable advancement in the realm of compact multi-modal models. It demonstrates
that even smaller... | 2024-01-04T16:07:43Z | The datasets were incomplete as they did not include all the
necessary copyrights | null | null | LLaVA-Phi: Efficient Multi-Modal Assistant with Small Language Model | ['Yichen Zhu', 'Minjie Zhu', 'Ning Liu', 'Zhiyuan Xu', 'Yaxin Peng'] | 2024 | Proceedings of the 1st International Workshop on Efficient Multimedia Computing under Limited | 103 | 44 | ['Computer Science']
2401.02385 | TinyLlama: An Open-Source Small Language Model | ['Peiyuan Zhang', 'Guangtao Zeng', 'Tianduo Wang', 'Wei Lu'] | ['cs.CL', 'cs.AI'] | We present TinyLlama, a compact 1.1B language model pretrained on around 1
trillion tokens for approximately 3 epochs. Building on the architecture and
tokenizer of Llama 2, TinyLlama leverages various advances contributed by the
open-source community (e.g., FlashAttention and Lit-GPT), achieving better
computational e... | 2024-01-04T17:54:59Z | Technical Report | null | null | null | null | null | null | null | null | null |
2401.02400 | Learning the 3D Fauna of the Web | ['Zizhang Li', 'Dor Litvak', 'Ruining Li', 'Yunzhi Zhang', 'Tomas Jakab', 'Christian Rupprecht', 'Shangzhe Wu', 'Andrea Vedaldi', 'Jiajun Wu'] | ['cs.CV'] | Learning 3D models of all animals on the Earth requires massively scaling up
existing solutions. With this ultimate goal in mind, we develop 3D-Fauna, an
approach that learns a pan-category deformable 3D animal model for more than
100 animal species jointly. One crucial bottleneck of modeling animals is the
limited ava... | 2024-01-04T18:32:48Z | The first two authors contributed equally to this work. The last
three authors contributed equally. Project page:
https://kyleleey.github.io/3DFauna/ | null | null | null | null | null | null | null | null | null |
2401.02415 | LLaMA Pro: Progressive LLaMA with Block Expansion | ['Chengyue Wu', 'Yukang Gan', 'Yixiao Ge', 'Zeyu Lu', 'Jiahao Wang', 'Ye Feng', 'Ying Shan', 'Ping Luo'] | ['cs.CL'] | Humans generally acquire new skills without compromising the old; however,
the opposite holds for Large Language Models (LLMs), e.g., from LLaMA to
CodeLLaMA. To this end, we propose a new post-pretraining method for LLMs with
an expansion of Transformer blocks. We tune the expanded blocks using only new
corpus, effici... | 2024-01-04T18:59:12Z | Accepted by ACL 2024, Main Conference | null | null | LLaMA Pro: Progressive LLaMA with Block Expansion | ['Chengyue Wu', 'Yukang Gan', 'Yixiao Ge', 'Zeyu Lu', 'Jiahao Wang', 'Ye Feng', 'Ping Luo', 'Ying Shan'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 72 | 69 | ['Computer Science']
2401.02584 | Towards Weakly Supervised Text-to-Audio Grounding | ['Xuenan Xu', 'Ziyang Ma', 'Mengyue Wu', 'Kai Yu'] | ['cs.SD', 'eess.AS'] | Text-to-audio grounding (TAG) task aims to predict the onsets and offsets of
sound events described by natural language. This task can facilitate
applications such as multimodal information retrieval. This paper focuses on
weakly-supervised text-to-audio grounding (WSTAG), where frame-level
annotations of sound events ... | 2024-01-05T00:27:32Z | null | null | null | null | null | null | null | null | null | null |
2401.02611 | MOODv2: Masked Image Modeling for Out-of-Distribution Detection | ['Jingyao Li', 'Pengguang Chen', 'Shaozuo Yu', 'Shu Liu', 'Jiaya Jia'] | ['cs.CV'] | The crux of effective out-of-distribution (OOD) detection lies in acquiring a
robust in-distribution (ID) representation, distinct from OOD samples. While
previous methods predominantly leaned on recognition-based techniques for this
purpose, they often resulted in shortcut learning, lacking comprehensive
representatio... | null | null | null | MOODv2: Masked Image Modeling for Out-of-Distribution Detection | ['Jingyao Li', 'Pengguang Chen', 'Shaozuo Yu', 'Shu Liu', 'Jiaya Jia'] | 2024 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 8 | 47 | ['Computer Science', 'Medicine']
2401.02677 | Progressive Knowledge Distillation Of Stable Diffusion XL Using Layer
Level Loss | ['Yatharth Gupta', 'Vishnu V. Jaddipal', 'Harish Prabhala', 'Sayak Paul', 'Patrick Von Platen'] | ['cs.CV', 'cs.AI'] | Stable Diffusion XL (SDXL) has become the best open source text-to-image
model (T2I) for its versatility and top-notch image quality. Efficiently
addressing the computational demands of SDXL models is crucial for wider reach
and applicability. In this work, we introduce two scaled-down variants, Segmind
Stable Diffusio... | 2024-01-05T07:21:46Z | null | null | null | Progressive Knowledge Distillation Of Stable Diffusion XL Using Layer Level Loss | ['Yatharth Gupta', 'Vishnu V. Jaddipal', 'Harish Prabhala', 'Sayak Paul', 'Patrick von Platen'] | 2024 | arXiv.org | 39 | 14 | ['Computer Science']
2401.02731 | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts
for Instruction Tuning on General Tasks | ['Haoyuan Wu', 'Haisheng Zheng', 'Zhuolun He', 'Bei Yu'] | ['cs.AI'] | Large language models (LLMs) have demonstrated considerable proficiency in
general natural language processing (NLP) tasks. Instruction tuning, a
successful paradigm, enhances the ability of LLMs to follow natural language
instructions and exhibit robust generalization across general tasks. However,
these models often ... | 2024-01-05T09:58:09Z | null | null | null | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | ['Haoyuan Wu', 'Haisheng Zheng', 'Bei Yu'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 16 | 72 | ['Computer Science']
2401.02797 | PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language
Models for Medical Imaging | ['Jinlong He', 'Pengfei Li', 'Gang Liu', 'Genrong He', 'Zhaolin Chen', 'Shenjun Zhong'] | ['cs.CL', 'cs.AI'] | Multimodal large language models (MLLMs) represent an evolutionary expansion
in the capabilities of traditional large language models, enabling them to
tackle challenges that surpass the scope of purely text-based applications. It
leverages the knowledge previously encoded within these language models,
thereby enhancin... | 2024-01-05T13:22:12Z | 12 pages, 8 figures, 12 tables | null | null | PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language Models for Medical Imaging | ['Jinlong He', 'Pengfei Li', 'Gang Liu', 'Zixu Zhao', 'Shenjun Zhong'] | 2024 | null | 3 | 54 | ['Computer Science']
2401.02909 | Introducing Bode: A Fine-Tuned Large Language Model for Portuguese
Prompt-Based Task | ['Gabriel Lino Garcia', 'Pedro Henrique Paiola', 'Luis Henrique Morelli', 'Giovani Candido', 'Arnaldo Cândido Júnior', 'Danilo Samuel Jodas', 'Luis C. S. Afonso', 'Ivan Rizzo Guilherme', 'Bruno Elias Penteado', 'João Paulo Papa'] | ['cs.CL'] | Large Language Models (LLMs) are increasingly bringing advances to Natural
Language Processing. However, low-resource languages, those lacking extensive
prominence in datasets for various NLP tasks, or where existing datasets are
not as substantial, such as Portuguese, already obtain several benefits from
LLMs, but not... | 2024-01-05T17:15:01Z | 10 pages, 3 figures | null | null | null | null | null | null | null | null | null |
2401.02955 | Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes
Interactively | ['Haobo Yuan', 'Xiangtai Li', 'Chong Zhou', 'Yining Li', 'Kai Chen', 'Chen Change Loy'] | ['cs.CV'] | The CLIP and Segment Anything Model (SAM) are remarkable vision foundation
models (VFMs). SAM excels in segmentation tasks across diverse domains, whereas
CLIP is renowned for its zero-shot recognition capabilities. This paper
presents an in-depth exploration of integrating these two models into a unified
framework. Sp... | 2024-01-05T18:59:22Z | Accepted by ECCV 2024; Project page:
https://www.mmlab-ntu.com/project/ovsam; Code:
https://github.com/HarborYuan/ovsam | null | 10.1007/978-3-031-72775-7_24 | Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively | ['Haobo Yuan', 'Xiangtai Li', 'Chong Zhou', 'Yining Li', 'Kai Chen', 'Chen Change Loy'] | 2024 | European Conference on Computer Vision | 51 | 93 | ['Computer Science']
2401.03003 | AST-T5: Structure-Aware Pretraining for Code Generation and
Understanding | ['Linyuan Gong', 'Mostafa Elhoushi', 'Alvin Cheung'] | ['cs.SE', 'cs.CL', 'cs.LG'] | Large language models (LLMs) have made significant advancements in
code-related tasks, yet many LLMs treat code as simple sequences, neglecting
its structured nature. We introduce AST-T5, a novel pretraining paradigm that
leverages the Abstract Syntax Tree (AST) for enhanced code generation,
transpilation, and understa... | 2024-01-05T06:51:08Z | 15 pages; ICML 2024: https://icml.cc/virtual/2024/poster/33601 | null | null | null | null | null | null | null | null | null |
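Note on the `arxiv_id` column: the schema above stores it as `float64`, so the viewer renders IDs with thousands separators and drops trailing zeros (e.g. `2401.01600` appears as `2,401.016`). A minimal sketch of restoring canonical IDs, assuming every row uses a post-2014 five-digit suffix as in this split (`normalize_arxiv_id` is a hypothetical helper, not part of the dataset):

```python
def normalize_arxiv_id(value: float) -> str:
    """Restore a canonical 'YYMM.NNNNN' arXiv ID from a float64 cell.

    Assumes post-2014 IDs, whose suffix is exactly five digits, so
    formatting with five decimal places pads back any zeros the float
    representation dropped (2401.016 -> '2401.01600').
    """
    prefix, _, suffix = f"{value:.5f}".partition(".")
    return f"{prefix}.{suffix}"
```

Pre-2015 IDs use four-digit suffixes, so a general-purpose version would need the publication date to pick the pad width; here all rows are from 2401.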