| date | arxiv_id | votes | title | abstract | url |
|---|---|---|---|---|---|
| 2023-05-04 | 2305.03048 | 9 | Personalize Segment Anything Model with One Shot | Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing the segmentation models. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under explored, e.g., automatically segmenting your... | https://huggingface.co/papers/2305.03048 |
| 2023-05-04 | 2305.02483 | 3 | ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization | Tailoring outputs from large language models, like ChatGPT, to implicit user preferences remains a challenge despite their impressive generative capabilities. In this paper, we propose a tri-agent generation pipeline comprising a generator, an instructor, and an editor to enhance output personalization. The generator p... | https://huggingface.co/papers/2305.02483 |
| 2023-05-04 | 2305.02463 | 3 | Shap-E: Generating Conditional 3D Implicit Functions | We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages:... | https://huggingface.co/papers/2305.02463 |
| 2023-05-04 | 2305.03047 | 1 | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision | Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of large language models (LLMs) with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependen... | https://huggingface.co/papers/2305.03047 |
| 2023-05-05 | 2305.02549 | 6 | FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction | The recent advent of self-supervised pre-training techniques has led to a surge in the use of multimodal learning in form document understanding. However, existing approaches that extend the mask language modeling to other modalities require careful multi-task tuning, complex reconstruction target designs, or additiona... | https://huggingface.co/papers/2305.02549 |
| 2023-05-05 | 2305.03043 | 5 | Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization | There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Method... | https://huggingface.co/papers/2305.03043 |
| 2023-05-05 | 2305.03049 | 3 | NeuralEditor: Editing Neural Radiance Fields via Manipulating Point Clouds | This paper proposes NeuralEditor that enables neural radiance fields (NeRFs) natively editable for general shape editing tasks. Despite their impressive results on novel-view synthesis, it remains a fundamental challenge for NeRFs to edit the shape of the scene. Our key insight is to exploit the explicit point cloud re... | https://huggingface.co/papers/2305.03049 |
| 2023-05-05 | 2305.02665 | 3 | Learning Language-Specific Layers for Multilingual Machine Translation | Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English). On th... | https://huggingface.co/papers/2305.02665 |
| 2023-05-05 | 2305.02499 | 3 | AutoML-GPT: Automatic Machine Learning with GPT | AI tasks encompass a wide range of domains and fields. While numerous AI models have been designed for specific tasks and applications, they often require considerable human efforts in finding the right model architecture, optimization algorithm, and hyperparameters. Recent advances in large language models (LLMs) like... | https://huggingface.co/papers/2305.02499 |
| 2023-05-05 | 2305.02783 | 2 | Automated Code generation for Information Technology Tasks in YAML through Large Language Models | The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general purpose programming languages. Domain specific languages, such as the ones used for IT Automation, have received far less attention, despite involving many active developers and being an essential... | https://huggingface.co/papers/2305.02783 |
| 2023-05-05 | 2305.03052 | 1 | Tracking through Containers and Occluders in the Wild | Tracking objects with persistence in cluttered and dynamic environments remains a difficult challenge for computer vision systems. In this paper, we introduce $\textbf{TCOW}$, a new benchmark and model for visual tracking through heavy occlusion and containment. We set up a task where the goal is to, given a video sequ... | https://huggingface.co/papers/2305.03052 |
| 2023-05-05 | 2305.03040 | 1 | TUVF: Learning Generalizable Texture UV Radiance Fields | Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity texture given shapes of 3D assets, which has been relatively less explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generat... | https://huggingface.co/papers/2305.03040 |
| 2023-05-05 | 2305.03027 | 1 | NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads | We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps. To this end, we propose a new multi-view capture setup composed of 16 calibrated machine vision cameras that record time-synchroniz... | https://huggingface.co/papers/2305.03027 |
| 2023-05-05 | 2305.02968 | 1 | Masked Trajectory Models for Prediction, Representation, and Control | We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns vers... | https://huggingface.co/papers/2305.02968 |
| 2023-05-05 | 2305.02790 | 1 | BranchNorm: Robustly Scaling Extremely Deep Transformers | Recently, DeepNorm scales Transformers into extremely deep (i.e., 1000 layers) and reveals the promising potential of deep scaling. To stabilize the training of deep models, DeepNorm (Wang et al., 2022) attempts to constrain the model update to a constant value. Although applying such a constraint can benefit the early... | https://huggingface.co/papers/2305.02790 |
| 2023-05-05 | 2305.02678 | 1 | Real-Time Neural Appearance Models | We present a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system level innovations. Our appearance model utilizes learned hierarchical textures that are interpreted using neural decoders, which pro... | https://huggingface.co/papers/2305.02678 |
| 2023-05-05 | 2305.02440 | 1 | Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs | Large language models (LLMs) power many state-of-the-art systems in natural language processing. However, these models are extremely computationally expensive, even at inference time, raising the natural question: when is the extra cost of deploying a larger model worth the anticipated boost in capabilities? Better und... | https://huggingface.co/papers/2305.02440 |
| 2023-05-05 | 2305.02412 | 1 | Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents | Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring, or action modeling (fine-tuning). However, the transformer architecture inherits several constraints ... | https://huggingface.co/papers/2305.02412 |
| 2023-05-07 | 2305.03111 | 10 | Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs | Text-to-SQL parsing, which aims at converting natural language instructions into executable SQLs, has gained increasing attention in recent years. In particular, Codex and ChatGPT have shown impressive results in this task. However, most of the prevalent benchmarks, i.e., Spider, and WikiSQL, focus on database schema w... | https://huggingface.co/papers/2305.03111 |
| 2023-05-07 | 2305.03726 | 6 | Otter: A Multi-Modal Model with In-Context Instruction Tuning | Large language models (LLMs) have demonstrated significant universal capabilities as few/zero-shot learners in various tasks due to their pre-training on vast amounts of text data, as exemplified by GPT-3, which boosted to InstructGPT and ChatGPT, effectively following natural language instructions to accomplish real-wo... | https://huggingface.co/papers/2305.03726 |
| 2023-05-07 | 2305.03695 | 4 | Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements | Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures. We consider a retrospective verification approach that reflects on the correctness of LM outputs, and introduce Vera, a general-purpose model that estimates the plausibility of declarat... | https://huggingface.co/papers/2305.03695 |
| 2023-05-07 | 2305.03210 | 1 | AttentionViz: A Global View of Transformer Attention | Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elemen... | https://huggingface.co/papers/2305.03210 |
| 2023-05-07 | 2305.03509 | 1 | Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion | Diffusion-based generative models' impressive ability to create convincing images has captured global attention. However, their complex internal structures and operations often make them difficult for non-experts to understand. We present Diffusion Explainer, the first interactive visualization tool that explains how S... | https://huggingface.co/papers/2305.03509 |
| 2023-05-07 | 2305.03514 | 1 | Can Large Language Models Transform Computational Social Science? | Large Language Models (LLMs) like ChatGPT are capable of successfully performing many language processing tasks zero-shot (without the need for training data). If this capacity also applies to the coding of social phenomena like persuasiveness and political ideology, then LLMs could effectively transform Computational ... | https://huggingface.co/papers/2305.03514 |
| 2023-05-07 | 2305.03719 | 0 | Governance of the AI, by the AI, and for the AI | Over the past half century, there have been several false dawns during which the "arrival" of world-changing artificial intelligence (AI) has been heralded. Tempting fate, the authors believe the age of AI has, indeed, finally arrived. Powerful image generators, such as DALL-E2 and Midjourney have suddenly allowed anyo... | https://huggingface.co/papers/2305.03719 |
| 2023-05-08 | 2305.04745 | 3 | Controllable Light Diffusion for Portraits | We introduce light diffusion, a novel method to improve lighting in portraits, softening harsh shadows and specular highlights while preserving overall scene illumination. Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo. Previous portrait reli... | https://huggingface.co/papers/2305.04745 |
| 2023-05-08 | 2305.04461 | 2 | Locally Attentional SDF Diffusion for Controllable 3D Shape Generation | Although the recent rapid evolution of 3D generative neural networks greatly improves 3D shape generation, it is still not convenient for ordinary users to create 3D shapes and control the local geometry of generated shapes. To address these challenges, we propose a diffusion-based 3D generation framework -- locally at... | https://huggingface.co/papers/2305.04461 |
| 2023-05-08 | 2305.04160 | 2 | X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | Large language models (LLMs) have demonstrated remarkable language abilities. GPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities beyond previous visual language models. We attribute this to the use of more advanced LLMs compared with previous multimodal models. Unfortunately, the model archit... | https://huggingface.co/papers/2305.04160 |
| 2023-05-08 | 2305.03689 | 2 | COLA: How to adapt vision-language models to Compose Objects Localized with Attributes? | Compositional reasoning is a hallmark of human visual intelligence; yet despite the size of large vision-language models, they struggle to represent simple compositions by combining objects with their attributes. To measure this lack of compositional capability, we design Cola, a text-to-image retrieval benchmark to Co... | https://huggingface.co/papers/2305.03689 |
| 2023-05-08 | 2305.04391 | 1 | A Variational Perspective on Solving Inverse Problems with Diffusion Models | Diffusion models have emerged as a key pillar of foundation models in visual domains. One of their critical applications is to universally solve different downstream inverse tasks via a single diffusion prior without re-training for each task. Most inverse tasks can be formulated as inferring a posterior distribution o... | https://huggingface.co/papers/2305.04391 |
| 2023-05-08 | 2305.03713 | 1 | Avatar Fingerprinting for Authorized Use of Synthetic Talking-Head Videos | Modern generators render talking-head videos with impressive levels of photorealism, ushering in new user experiences such as videoconferencing under constrained bandwidth budgets. Their safe adoption, however, requires a mechanism to verify if the rendered video is trustworthy. For instance, for videoconferencing we m... | https://huggingface.co/papers/2305.03713 |
| 2023-05-08 | 2305.03668 | 1 | A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding | Webpages have been a rich, scalable resource for vision-language and language only tasks. Yet only pieces of webpages are kept in existing datasets: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data left ... | https://huggingface.co/papers/2305.03668 |
| 2023-05-08 | 2305.03286 | 1 | Composite Motion Learning with Task Control | We present a deep learning method for composite and task-driven motion control for physically simulated characters. In contrast to existing data-driven approaches using reinforcement learning that imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneousl... | https://huggingface.co/papers/2305.03286 |
| 2023-05-09 | 2305.05176 | 6 | FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance | There is a rapidly growing number of large language models (LLMs) that users can query for a fee. We review the cost associated with querying popular LLM APIs, e.g. GPT-4, ChatGPT, J1-Jumbo, and find that these models have heterogeneous pricing structures, with fees that can differ by two orders of magnitude. In partic... | https://huggingface.co/papers/2305.05176 |
| 2023-05-09 | 2305.05644 | 5 | Towards Building the Federated GPT: Federated Instruction Tuning | While "instruction-tuned" generative large language models (LLMs) have demonstrated an impressive ability to generalize to new tasks, the training phases heavily rely on large amounts of diverse and high-quality instruction data (such as ChatGPT and GPT-4). Unfortunately, acquiring high-quality data, especially when i... | https://huggingface.co/papers/2305.05644 |
| 2023-05-09 | 2305.05662 | 4 | InternChat: Solving Vision-Centric Tasks by Interacting with Chatbots Beyond Language | We present an interactive visual framework named InternChat, or iChat for short. The framework integrates chatbots that have planning and reasoning capabilities, such as ChatGPT, with non-verbal instructions like pointing movements that enable users to directly manipulate images or videos on the screen. Pointing (inclu... | https://huggingface.co/papers/2305.05662 |
| 2023-05-09 | 2305.04091 | 3 | Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models | Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and... | https://huggingface.co/papers/2305.04091 |
| 2023-05-09 | 2305.05189 | 2 | SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Diffusion models, which have emerged to become popular text-to-image generation models, can produce high-quality and content-rich images guided by textual prompts. However, there are limitations to semantic understanding and commonsense reasoning in existing models when the input prompts are concise narrative, resultin... | https://huggingface.co/papers/2305.05189 |
| 2023-05-09 | 2305.03937 | 2 | Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization | Prompt tuning is one of the successful approaches for parameter-efficient tuning of pre-trained language models. Despite being arguably the most parameter-efficient (tuned soft prompts constitute <0.1% of total parameters), it typically performs worse than other efficient tuning methods and is quite sensitive to hyper-... | https://huggingface.co/papers/2305.03937 |
| 2023-05-09 | 2305.04790 | 1 | MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | We present a vision and language model named MultiModal-GPT to conduct multi-round dialogue with humans. MultiModal-GPT can follow various instructions from humans, such as generating a detailed caption, counting the number of interested objects, and answering general questions from users. MultiModal-GPT is parameter-e... | https://huggingface.co/papers/2305.04790 |
| 2023-05-09 | 2305.04789 | 1 | AvatarReX: Real-time Expressive Full-body Avatars | We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and the face together, but also supports real-time animation and rendering. To this end, we propose a compositional avatar representation, where the bod... | https://huggingface.co/papers/2305.04789 |
| 2023-05-09 | 2305.04388 | 1 | Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting | Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. However, we find that CoT explana... | https://huggingface.co/papers/2305.04388 |
| 2023-05-09 | 2305.04268 | 1 | Multi-Space Neural Radiance Fields | Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects, often resulting in blurry or distorted rendering. Instead of calculating a single radiance field, we propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel... | https://huggingface.co/papers/2305.04268 |
| 2023-05-09 | 2305.04241 | 1 | Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens | Transformer models are foundational to natural language processing (NLP) and computer vision. Despite various recent works devoted to reducing the quadratic cost of such models (as a function of the sequence length n), dealing with ultra long sequences efficiently (e.g., with more than 16K tokens) remains challenging. ... | https://huggingface.co/papers/2305.04241 |
| 2023-05-09 | 2305.03981 | 1 | Pre-training Language Model as a Multi-perspective Course Learner | ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM... | https://huggingface.co/papers/2305.03981 |
| 2023-05-10 | 2305.05065 | 7 | Recommender Systems with Generative Retrieval | Modern recommender systems leverage large-scale retrieval models consisting of two stages: training a dual-encoder model to embed queries and candidates in the same space, followed by an Approximate Nearest Neighbor (ANN) search to select top candidates given a query's embedding. In this paper, we propose a new single-... | https://huggingface.co/papers/2305.05065 |
| 2023-05-10 | 2304.09355 | 5 | To Compress or Not to Compress- Self-Supervised Learning and Information Theory: A Review | Deep neural networks have demonstrated remarkable performance in supervised learning tasks but require large amounts of labeled data. Self-supervised learning offers an alternative paradigm, enabling the model to learn from data without explicit labels. Information theory has been instrumental in understanding and opti... | https://huggingface.co/papers/2304.09355 |
| 2023-05-10 | 2305.05862 | 4 | Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? An Examination on Several Typical Tasks | The most recent large language models such as ChatGPT and GPT-4 have garnered significant attention, as they are capable of generating high-quality responses to human input. Despite the extensive testing of ChatGPT and GPT-4 on generic text corpora, showcasing their impressive capabilities, a study focusing on financia... | https://huggingface.co/papers/2305.05862 |
| 2023-05-10 | 2305.05591 | 3 | AudioSlots: A slot-centric generative model for audio separation | In a range of recent works, object-centric architectures have been shown to be suitable for unsupervised scene decomposition in the vision domain. Inspired by these methods we present AudioSlots, a slot-centric generative model for blind source separation in the audio domain. AudioSlots is built using permutation-equiv... | https://huggingface.co/papers/2305.05591 |
| 2023-05-10 | 2305.06077 | 2 | Relightify: Relightable 3D Faces from a Single Image via Diffusion Models | Following the remarkable success of diffusion models on image generation, recent works have also demonstrated their impressive ability to address a number of inverse problems in an unsupervised way, by properly constraining the sampling process based on a conditioning input. Motivated by this, in this paper, we present... | https://huggingface.co/papers/2305.06077 |
| 2023-05-10 | 2305.05845 | 2 | Sketching the Future (STF): Applying Conditional Control Techniques to Text-to-Video Models | The proliferation of video content demands efficient and flexible neural network based approaches for generating new video content. In this paper, we propose a novel approach that combines zero-shot text-to-video generation with ControlNet to improve the output of these models. Our method takes multiple sketched frames... | https://huggingface.co/papers/2305.05845 |
| 2023-05-10 | 2305.05658 | 2 | TidyBot: Personalized Robot Assistance with Large Language Models | For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining... | https://huggingface.co/papers/2305.05658 |
| 2023-05-10 | 2305.05364 | 2 | Large Language Model Programs | In recent years, large pre-trained language models (LLMs) have demonstrated the ability to follow instructions and perform novel tasks from a few examples. The possibility to parameterise an LLM through such in-context examples widens their capability at a much lower cost than finetuning. We extend this line of reasoni... | https://huggingface.co/papers/2305.05364 |
| 2023-05-10 | 2305.04966 | 2 | NerfAcc: Efficient Sampling Accelerates NeRFs | Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering. Recent works have included alternative sampling approaches to help accelerate their methods, however, they are often not the focus of the work. In this paper, we investigate and c... | https://huggingface.co/papers/2305.04966 |
| 2023-05-10 | 2305.05383 | 2 | Code Execution with Pre-trained Language Models | Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can un... | https://huggingface.co/papers/2305.05383 |
| 2023-05-10 | 2305.05432 | 1 | WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset | Webpages have been a rich resource for language and vision-language tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data underused. To study multimodal webpage un... | https://huggingface.co/papers/2305.05432 |
| 2023-05-11 | 2305.06161 | 31 | StarCoder: may the source be with you! | The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. Sta... | https://huggingface.co/papers/2305.06161 |
| 2023-05-11 | 2305.06355 | 3 | VideoChat: Chat-Centric Video Understanding | In this study, we initiate an exploration into video understanding by introducing VideoChat, an end-to-end chat-centric video understanding system. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal rela... | https://huggingface.co/papers/2305.06355 |
| 2023-05-11 | 2305.06131 | 2 | Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era | Generative AI (AIGC, a.k.a. AI generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical one since it enables the interaction between human instruction and AIGC. Due to the development in text-to-image as well as 3D modeling technologies (like N... | https://huggingface.co/papers/2305.06131 |
| 2023-05-11 | 2305.06356 | 1 | HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion | Representing human performance at high-fidelity is an essential building block in diverse applications, such as film production, computer games or videoconferencing. To close the gap to production-level quality, we introduce HumanRF, a 4D dynamic neural scene representation that captures full-body appearance in motion ... | https://huggingface.co/papers/2305.06356 |
| 2023-05-11 | 2305.06351 | 1 | Reconstructing Animatable Categories from Videos | Building animatable 3D models is challenging due to the need for 3D scans, laborious registration, and manual rigging, which are difficult to scale to arbitrary categories. Recently, differentiable rendering provides a pathway to obtain high-quality 3D models from monocular videos, but these are limited to rigid catego... | https://huggingface.co/papers/2305.06351 |
| 2023-05-11 | 2305.06324 | 1 | Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception | We present Integrated Multimodal Perception (IMP), a simple and scalable multimodal multi-task training and modeling approach. IMP integrates multimodal inputs including image, video, text, and audio into a single Transformer encoder with minimal modality-specific components. IMP makes use of a novel design that combin... | https://huggingface.co/papers/2305.06324 |
| 2023-05-11 | 2305.05973 | 1 | Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models | We propose a novel approach for developing privacy-preserving large-scale recommender systems using differentially private (DP) large language models (LLMs) which overcomes certain challenges and limitations in DP training these complex systems. Our method is particularly well suited for the emerging area of LLM-based ... | https://huggingface.co/papers/2305.05973 |
| 2023-05-11 | 2305.05706 | 1 | DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects | To enable general-purpose robots, we will require the robot to operate daily articulated objects as humans do. Current robot manipulation has heavily relied on using a parallel gripper, which restricts the robot to a limited set of objects. On the other hand, operating with a multi-finger robot hand will allow better a... | https://huggingface.co/papers/2305.05706 |
| 2023-05-11 | 2305.06218 | 1 | Multi-Task End-to-End Training Improves Conversational Recommendation | In this paper, we analyze the performance of a multitask end-to-end transformer model on the task of conversational recommendations, which aim to provide recommendations based on a user's explicit preferences expressed in dialogue. While previous works in this area adopt complex multi-component approaches where the dia... | https://huggingface.co/papers/2305.06218 |
| 2023-05-12 | 2305.06908 | 6 | CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model | Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. I... | https://huggingface.co/papers/2305.06908 |
| 2023-05-12 | 2305.07011 | 5 | Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers | We present Region-aware Open-vocabulary Vision Transformers (RO-ViT) - a contrastive image-text pretraining recipe to bridge the gap between image-level pretraining and open-vocabulary object detection. At the pretraining phase, we propose to randomly crop and resize regions of positional embeddings instead of using th... | https://huggingface.co/papers/2305.07011 |
| 2023-05-12 | 2305.06500 | 5 | InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning | General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-l... | https://huggingface.co/papers/2305.06500 |
2023-05-12 | 2305.07027 | 4 | EfficientViT: Memory Efficient Vision Transformer with Cascaded Group
Attention | Vision transformers have shown great success due to their high model
capabilities. However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find tha... | https://huggingface.co/papers/2305.07027 |
2023-05-12 | 2305.07015 | 4 | Exploiting Diffusion Prior for Real-World Image Super-Resolution | We present a novel approach to leverage prior knowledge encapsulated in
pre-trained text-to-image diffusion models for blind super-resolution (SR).
Specifically, by employing our time-aware encoder, we can achieve promising
restoration results without altering the pre-trained synthesis model, thereby
preserving the gen... | https://huggingface.co/papers/2305.07015 |
2023-05-12 | 2305.07017 | 3 | An Inverse Scaling Law for CLIP Training | CLIP, one of the pioneering foundation models that connect images and text,
has enabled many recent breakthroughs in computer vision. However, its
associated training cost is prohibitively high, imposing a significant barrier
to its widespread exploration. In this paper, we present a surprising finding
that there exist... | https://huggingface.co/papers/2305.07017 |
2023-05-12 | 2305.06575 | 2 | Chain-of-Dictionary Prompting Elicits Translation in Large Language
Models | Large language models (LLMs) have shown surprisingly good performance in
multilingual neural machine translation (MNMT) even when trained without
parallel data. Yet, despite the fact that the amount of training data is
gigantic, they still struggle with translating rare words, particularly for
low-resource languages. E... | https://huggingface.co/papers/2305.06575 |
2023-05-12 | 2305.07021 | 1 | Simple Token-Level Confidence Improves Caption Correctness | The ability to judge whether a caption correctly describes an image is a
critical part of vision-language understanding. However, state-of-the-art
models often misinterpret the correctness of fine-grained details, leading to
errors in outputs such as hallucinating objects in generated captions or poor
compositional rea... | https://huggingface.co/papers/2305.07021 |
2023-05-12 | 2305.07004 | 1 | Not All Languages Are Created Equal in LLMs: Improving Multilingual
Capability by Cross-Lingual-Thought Prompting | Large language models (LLMs) demonstrate impressive multilingual capability,
but their performance varies substantially across different languages. In this
work, we introduce a simple yet effective method, called cross-lingual-thought
prompting (XLT), to systematically improve the multilingual capability of LLMs.
Speci... | https://huggingface.co/papers/2305.07004 |
2023-05-12 | 2305.06594 | 1 | V2Meow: Meowing to the Visual Beat via Music Generation | Generating high quality music that complements the visual content of a video
is a challenging task. Most existing visual conditioned music generation
systems generate symbolic music data, such as MIDI files, instead of raw audio
waveform. Given the limited availability of symbolic music data, such methods
can only gene... | https://huggingface.co/papers/2305.06594 |
2023-05-12 | 2305.06555 | 1 | Domain Incremental Lifelong Learning in an Open World | Lifelong learning (LL) is an important ability for NLP models to learn new
tasks continuously. Architecture-based approaches are reported to be effective
implementations for LL models. However, it is non-trivial to extend previous
approaches to domain incremental LL scenarios since they either require access
to task id... | https://huggingface.co/papers/2305.06555 |
2023-05-12 | 2305.06474 | 1 | Do LLMs Understand User Preferences? Evaluating LLMs On User Rating
Prediction | Large Language Models (LLMs) have demonstrated exceptional capabilities in
generalizing to new tasks in a zero-shot or few-shot manner. However, the
extent to which LLMs can comprehend user preferences based on their previous
behavior remains an emerging and still unclear research question.
Traditionally, Collaborative... | https://huggingface.co/papers/2305.06474 |
2023-05-12 | 2305.06456 | 1 | Perpetual Humanoid Control for Real-time Simulated Avatars | We present a physics-based humanoid controller that achieves high-fidelity
motion imitation and fault-tolerant behavior in the presence of noisy input
(e.g. pose estimates from video or generated from language) and unexpected
falls. Our controller scales up to learning ten thousand motion clips without
using any extern... | https://huggingface.co/papers/2305.06456 |
2023-05-12 | 2305.06424 | 1 | Bot or Human? Detecting ChatGPT Imposters with A Single Question | Large language models like ChatGPT have recently demonstrated impressive
capabilities in natural language understanding and generation, enabling various
applications including translation, essay writing, and chit-chatting. However,
there is a concern that they can be misused for malicious purposes, such as
fraud or den... | https://huggingface.co/papers/2305.06424 |
2023-05-12 | 2305.06404 | 1 | LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits
Siamese-BLOOM | Text embeddings are useful features for several NLP applications, such as
sentence similarity, text clustering, and semantic search. In this paper, we
present a Low-rank Adaptation with a Contrastive objective on top of 8-bit
Siamese-BLOOM, a multilingual large language model optimized to produce
semantically meaningfu... | https://huggingface.co/papers/2305.06404 |
2023-05-14 | 2305.07185 | 9 | MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers | Autoregressive transformers are spectacular models for short sequences but
scale poorly to long sequences such as high-resolution images, podcasts, code,
or books. We proposed Megabyte, a multi-scale decoder architecture that enables
end-to-end differentiable modeling of sequences of over one million bytes.
Megabyte se... | https://huggingface.co/papers/2305.07185 |
2023-05-14 | 2305.07243 | 5 | Better speech synthesis through scaling | In recent years, the field of image generation has been revolutionized by the
application of autoregressive transformers and DDPMs. These approaches model
the process of image generation as a step-wise probabilistic processes and
leverage large amounts of compute and data to learn the image distribution.
This methodolo... | https://huggingface.co/papers/2305.07243 |
2023-05-14 | 2305.07490 | 1 | ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced
MiniGPT-4 | In recent years, large language models (LLMs) have made significant progress
in natural language processing (NLP), with models like ChatGPT and GPT-4
achieving impressive capabilities in various linguistic tasks. However,
training models on such a large scale is challenging, and finding datasets that
match the model's ... | https://huggingface.co/papers/2305.07490 |
2023-05-15 | 2305.08379 | 3 | TESS: Text-to-Text Self-Conditioned Simplex Diffusion | Diffusion models have emerged as a powerful paradigm for generation,
obtaining strong performance in various domains with continuous-valued inputs.
Despite the promises of fully non-autoregressive text generation, applying
diffusion models to natural language remains challenging due to its discrete
nature. In this work... | https://huggingface.co/papers/2305.08379 |
2023-05-15 | 2305.07447 | 3 | Universal Source Separation with Weakly Labelled Data | Universal source separation (USS) is a fundamental research task for
computational auditory scene analysis, which aims to separate mono recordings
into individual source tracks. There are three potential challenges awaiting
the solution to the audio source separation task. First, previous audio source
separation system... | https://huggingface.co/papers/2305.07447 |
2023-05-15 | 2305.08850 | 1 | Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts | The text-driven image and video diffusion models have achieved unprecedented
success in generating realistic and diverse content. Recently, the editing and
variation of existing images and videos in diffusion-based generative models
have garnered significant attention. However, previous works are limited to
editing con... | https://huggingface.co/papers/2305.08850 |
2023-05-15 | 2305.07615 | 1 | What are the Desired Characteristics of Calibration Sets? Identifying
Correlates on Long Form Scientific Summarization | Summarization models often generate text that is poorly calibrated to quality
metrics because they are trained to maximize the likelihood of a single
reference (MLE). To address this, recent work has added a calibration step,
which exposes a model to its own ranked outputs to improve relevance or, in a
separate line of... | https://huggingface.co/papers/2305.07615 |
2023-05-15 | 2305.07558 | 1 | Measuring Progress in Fine-grained Vision-and-Language Understanding | While pretraining on large-scale image-text data from the Web has facilitated
rapid progress on many vision-and-language (V&L) tasks, recent work has
demonstrated that pretrained models lack "fine-grained" understanding, such as
the ability to recognise relationships, verbs, and numbers in images. This has
resulted in ... | https://huggingface.co/papers/2305.07558 |
2023-05-15 | 2305.07514 | 1 | BlendFields: Few-Shot Example-Driven Facial Modeling | Generating faithful visualizations of human faces requires capturing both
coarse and fine-level details of the face geometry and appearance. Existing
methods are either data-driven, requiring an extensive corpus of data not
publicly accessible to the research community, or fail to capture fine details
because they rely... | https://huggingface.co/papers/2305.07514 |
2023-05-15 | 2305.07378 | 1 | Surfacing Biases in Large Language Models using Contrastive Input
Decoding | Ensuring that large language models (LMs) are fair, robust and useful
requires an understanding of how different modifications to their inputs impact
the model's behaviour. In the context of open-text generation tasks, however,
such an evaluation is not trivial. For example, when introducing a model with
an input text ... | https://huggingface.co/papers/2305.07378 |
2023-05-15 | 2305.07214 | 1 | MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition | In this paper, we study a novel problem in egocentric action recognition,
which we term as "Multimodal Generalization" (MMG). MMG aims to study how
systems can generalize when data from certain modalities is limited or even
completely missing. We thoroughly investigate MMG in the context of standard
supervised action r... | https://huggingface.co/papers/2305.07214 |
2023-05-15 | 2305.07440 | 1 | Optimizing Memory Mapping Using Deep Reinforcement Learning | Resource scheduling and allocation is a critical component of many high
impact systems ranging from congestion control to cloud computing. Finding more
optimal solutions to these problems often has significant impact on resource
and time savings, reducing device wear-and-tear, and even potentially improving
carbon emis... | https://huggingface.co/papers/2305.07440 |
2023-05-15 | 2305.07153 | 0 | Towards best practices in AGI safety and governance: A survey of expert
opinion | A number of leading AI companies, including OpenAI, Google DeepMind, and
Anthropic, have the stated goal of building artificial general intelligence
(AGI) - AI systems that achieve or exceed human performance across a wide range
of cognitive tasks. In pursuing this goal, they may develop and deploy AI
systems that pose... | https://huggingface.co/papers/2305.07153 |
2023-05-16 | 2305.07759 | 36 | TinyStories: How Small Can Language Models Be and Still Speak Coherent
English? | Language models (LMs) are powerful tools for natural language processing, but
they often struggle to produce coherent and fluent text when they are small.
Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can
rarely generate coherent and consistent English text beyond a few words even
after ex... | https://huggingface.co/papers/2305.07759 |
2023-05-16 | 2305.09636 | 13 | SoundStorm: Efficient Parallel Audio Generation | We present SoundStorm, a model for efficient, non-autoregressive audio
generation. SoundStorm receives as input the semantic tokens of AudioLM, and
relies on bidirectional attention and confidence-based parallel decoding to
generate the tokens of a neural audio codec. Compared to the autoregressive
generation approach ... | https://huggingface.co/papers/2305.09636 |
2023-05-16 | 2305.08596 | 9 | DarkBERT: A Language Model for the Dark Side of the Internet | Recent research has suggested that there are clear differences in the
language used in the Dark Web compared to that of the Surface Web. As studies
on the Dark Web commonly require textual analysis of the domain, language
models specific to the Dark Web may provide valuable insights to researchers.
In this work, we int... | https://huggingface.co/papers/2305.08596 |
2023-05-16 | 2305.09617 | 5 | Towards Expert-Level Medical Question Answering with Large Language
Models | Recent artificial intelligence (AI) systems have reached milestones in "grand
challenges" ranging from Go to protein-folding. The capability to retrieve
medical knowledge, reason over it, and answer medical questions comparably to
physicians has long been viewed as one such grand challenge.
Large language models (LLM... | https://huggingface.co/papers/2305.09617 |
2023-05-16 | 2305.07922 | 5 | CodeT5+: Open Code Large Language Models for Code Understanding and
Generation | Large language models (LLMs) pretrained on vast source code have achieved
prominent progress in code intelligence. However, existing code LLMs have two
main limitations in terms of architecture and pretraining tasks. First, they
often adopt a specific architecture (encoder-only or decoder-only) or rely on a
unified enc... | https://huggingface.co/papers/2305.07922 |
2023-05-16 | 2305.08848 | 4 | Small Models are Valuable Plug-ins for Large Language Models | Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their
weights are often publicly unavailable and their immense sizes make the models
difficult to be tuned with common hardware. As a result, effectively tuning
these models with large-scale supervised data can be challenging. As an
alternative, In-C... | https://huggingface.co/papers/2305.08848 |
2023-05-16 | 2305.09662 | 3 | Make-An-Animation: Large-Scale Text-conditional 3D Human Motion
Generation | Text-guided human motion generation has drawn significant interest because of
its impactful applications spanning animation and robotics. Recently,
application of diffusion models for motion generation has enabled improvements
in the quality of generated motions. However, existing approaches are limited
by their relian... | https://huggingface.co/papers/2305.09662 |
Daily Papers Popularity
From the Frontier Research Team at Takara.ai, we present Daily Papers Popularity, a dataset tracking the popularity of Hugging Face Papers with arXiv metadata. It aggregates daily paper entries with votes, IDs, titles, abstracts (backfilled via the HF API), and URLs, enabling analysis of patterns in paper reception and engagement.
- Columns: `date`, `arxiv_id`, `votes`, `title`, `abstract`, `url`
- Format: Parquet
Load

```python
from datasets import load_dataset

ds = load_dataset("takara-ai/daily-papers-popularity")
```
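Once loaded, each record exposes the columns above as a plain dict. As a minimal stdlib-only sketch (the sample rows below are hypothetical, transcribed from the preview rather than fetched from the Hub), a per-day vote aggregation looks like:

```python
from collections import defaultdict

# Hypothetical sample rows mirroring the dataset schema
# (real rows come from load_dataset("takara-ai/daily-papers-popularity")).
rows = [
    {"date": "2023-05-12", "arxiv_id": "2305.07027", "votes": 4,
     "title": "EfficientViT: Memory Efficient Vision Transformer"},
    {"date": "2023-05-12", "arxiv_id": "2305.07015", "votes": 4,
     "title": "Exploiting Diffusion Prior for Real-World Image Super-Resolution"},
    {"date": "2023-05-16", "arxiv_id": "2305.07759", "votes": 36,
     "title": "TinyStories"},
]

# Sum votes per date: a simple proxy for daily engagement.
votes_per_day = defaultdict(int)
for row in rows:
    votes_per_day[row["date"]] += row["votes"]

print(dict(votes_per_day))  # {'2023-05-12': 8, '2023-05-16': 36}
```

Swapping `rows` for the loaded split (`ds["train"]`) applies the same loop to the full dataset.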
Visualisations
Reference charts derived from the dataset. Each visual links to static assets hosted alongside the dataset.
- Votes vs Title Length
- Votes vs Abstract Length
- Votes vs Month
- Votes vs Day of Month
- Distribution: Daily Paper Concentration
- Votes vs Daily Paper Concentration
For research inquiries and press, please reach out to research@takara.ai
Transforming humanity
Downloads last month: 65





