Schema (field: dtype, observed range or distinct values):

parent_paper_title: string, 63 distinct values
parent_paper_arxiv_id: string, 63 distinct values
citation_shorthand: string, length 2–56
raw_citation_text: string, length 9–63
cited_paper_title: string, length 5–161
cited_paper_arxiv_link: string, length 32–37
cited_paper_abstract: string, length 406–1.92k
has_metadata: bool, 1 distinct value
is_arxiv_paper: bool, 2 distinct values
bib_paper_authors: string, length 2–2.44k
bib_paper_year: float64, range 1.97k–2.03k
bib_paper_month: string, 16 distinct values
bib_paper_url: string, length 20–116
bib_paper_doi: string, 269 distinct values
bib_paper_journal: string, length 3–148
original_title: string, length 5–161
search_res_title: string, length 4–122
search_res_url: string, length 22–267
search_res_content: string, length 19–1.92k
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
levesque2012winograd
\cite{levesque2012winograd}
The Defeat of the Winograd Schema Challenge
http://arxiv.org/abs/2201.02387v3
The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011. By 2019, a number of AI systems, based on large pre-trained transformer-based language models and fine-tuned on these kinds ...
true
true
Levesque, Hector and Davis, Ernest and Morgenstern, Leora
2012
null
null
null
null
The Defeat of the Winograd Schema Challenge
The Defeat of the Winograd Schema Challenge
http://arxiv.org/pdf/2201.02387v3
The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011. By 2019, a number of AI systems, based on large pre-trained transformer-based language models and fine-tuned on these kinds ...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
zhao2018gender
\cite{zhao2018gender}
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
http://arxiv.org/abs/1804.06876v1
We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural co...
true
true
Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei
2018
null
null
null
null
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
http://arxiv.org/pdf/1804.06876v1
We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural co...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
vanmassenhove2021neutral
\cite{vanmassenhove2021neutral}
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender-Neutral Alternatives
http://arxiv.org/abs/2109.06105v1
Recent years have seen an increasing need for gender-neutral and inclusive language. Within the field of NLP, there are various mono- and bilingual use cases where gender inclusive language is appropriate, if not preferred due to ambiguity or uncertainty in terms of the gender of referents. In this work, we present a r...
true
true
Vanmassenhove, Eva and Emmery, Chris and Shterionov, Dimitar
2021
null
null
null
null
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender-Neutral Alternatives
NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic ...
https://www.researchgate.net/publication/357122955_NeuTral_Rewriter_A_Rule-Based_and_Neural_Approach_to_Automatic_Rewriting_into_Gender_Neutral_Alternatives
Our work falls Round-trip translation (from gender-neutral to gender-biased) and neural text paraphrasing German [18] Rule-based gender rewriting
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
rudinger2018gender
\cite{rudinger2018gender}
Gender Bias in Coreference Resolution
http://arxiv.org/abs/1804.09301v1
We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these "Winogender schemas," we evaluate and confirm systematic gender bias in three publicly-available coreference reso...
true
true
Rudinger, Rachel and Naradowsky, Jason and Leonard, Brian and Van Durme, Benjamin
2018
null
null
null
null
Gender Bias in Coreference Resolution
Gender Bias in Coreference Resolution
http://arxiv.org/pdf/1804.09301v1
We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these "Winogender schemas," we evaluate and confirm systematic gender bias in three publicly-available coreference reso...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
srivastava2023beyond
\cite{srivastava2023beyond}
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
http://arxiv.org/abs/2206.04615v3
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate soc...
true
true
{BIG-bench authors}
2023
null
null
null
TMLR
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Quantifying and extrapolating the capabilities of language models
https://openreview.net/forum?id=uyTL5Bvosj
The paper introduces the Beyond the Imitation Game benchmark (BIG-bench) as a way to better understand the current and near-future capabilities and limitations
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
dhamala2021bold
\cite{dhamala2021bold}
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
http://arxiv.org/abs/2101.11718v1
Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit soci...
true
true
Dhamala, Jwala and Sun, Tony and Kumar, Varun and Krishna, Satyapriya and Pruksachatkun, Yada and Chang, Kai-Wei and Gupta, Rahul
2021
null
null
null
null
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Bias in Open-ended Language Generation Dataset (BOLD) - GitHub
https://github.com/amazon-science/bold
Bias in Open-ended Language Generation Dataset (BOLD) is a dataset to evaluate fairness in open-ended language generation in English language.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kotek2023gender
\cite{kotek2023gender}
Gender bias and stereotypes in Large Language Models
http://arxiv.org/abs/2308.14921v1
Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known issue for prior models. We use a simple paradigm to test the presence of gender bias, buildin...
true
true
Kotek, Hadas and Dockum, Rikker and Sun, David
2023
null
null
null
null
Gender bias and stereotypes in Large Language Models
Gender bias and stereotypes in Large Language Models
http://arxiv.org/pdf/2308.14921v1
Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known issue for prior models. We use a simple paradigm to test the presence of gender bias, buildin...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
parrish2021bbq
\cite{parrish2021bbq}
BBQ: A Hand-Built Bias Benchmark for Question Answering
http://arxiv.org/abs/2110.08193v2
It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases...
true
true
Parrish, Alicia and Chen, Angelica and Nangia, Nikita and Padmakumar, Vishakh and Phang, Jason and Thompson, Jana and Htut, Phu Mon and Bowman, Samuel R
2021
null
null
null
null
BBQ: A Hand-Built Bias Benchmark for Question Answering
BBQ: A hand-built bias benchmark for question answering
https://aclanthology.org/2022.findings-acl.165/
by A Parrish · 2022 · Cited by 512 — We introduce the Bias Benchmark for QA (BBQ), a dataset of question-sets constructed by the authors that highlight attested social biases.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
webster-etal-2018-mind
\cite{webster-etal-2018-mind}
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
http://arxiv.org/abs/1810.05201v1
Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we fin...
true
true
Webster, Kellie and Recasens, Marta and Axelrod, Vera and Baldridge, Jason
2018
null
null
null
Transactions of the Association for Computational Linguistics
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
http://arxiv.org/pdf/1810.05201v1
Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we fin...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
pant-dadu-2022-incorporating
\cite{pant-dadu-2022-incorporating}
Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer
null
null
true
false
Pant, Kartikey and Dadu, Tanvi
2022
null
null
null
null
Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer
Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP ...
https://www.researchgate.net/publication/362266417_Incorporating_Subjectivity_into_Gendered_Ambiguous_Pronoun_GAP_Resolution_using_Style_Transfer
Incorporating Subjectivity into Gendered Ambiguous Pronoun (GAP) Resolution using Style Transfer ... GAP-Subjective is the same size as GAP, with 8,908 instances.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
levy-etal-2021-collecting-large
\cite{levy-etal-2021-collecting-large}
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
http://arxiv.org/abs/2109.03858v2
Recent works have found evidence of gender bias in models of machine translation and coreference resolution using mostly synthetic diagnostic datasets. While these quantify bias in a controlled experiment, they often do so on a small scale and consist mostly of artificial, out-of-distribution sentences. In this work, w...
true
true
Levy, Shahar and Lazar, Koren and Stanovsky, Gabriel
2021
null
null
null
null
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
[PDF] Collecting a Large-Scale Gender Bias Dataset for Coreference ...
https://aclanthology.org/2021.findings-emnlp.211.pdf
We use BUG to evaluate gender bias in various coref- erence resolution and machine translation models, finding that models tend to make
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
gawlikowski2023survey
\cite{gawlikowski2023survey}
A Survey of Uncertainty in Deep Neural Networks
http://arxiv.org/abs/2107.03342v3
Due to their increasing spread, confidence in neural network predictions became more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over or under confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's predic...
true
true
Gawlikowski, Jakob and Tassi, Cedrique Rovile Njieutcheu and Ali, Mohsin and Lee, Jongseok and Humt, Matthias and Feng, Jianxiang and Kruspe, Anna and Triebel, Rudolph and Jung, Peter and Roscher, Ribana and others
2023
null
null
null
Artificial Intelligence Review
A Survey of Uncertainty in Deep Neural Networks
A Survey of Uncertainty in Deep Neural Networks
http://arxiv.org/pdf/2107.03342v3
Due to their increasing spread, confidence in neural network predictions became more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over or under confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's predic...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
hu2023uncertainty
\cite{hu2023uncertainty}
Uncertainty in Natural Language Processing: Sources, Quantification, and Applications
http://arxiv.org/abs/2306.04459v1
As a main field of artificial intelligence, natural language processing (NLP) has achieved remarkable success via deep neural networks. Plenty of NLP tasks have been addressed in a unified manner, with various tasks being associated with each other through sharing the same paradigm. However, neural networks are black b...
true
true
Hu, Mengting and Zhang, Zhen and Zhao, Shiwan and Huang, Minlie and Wu, Bingzhe
2023
null
null
null
arXiv preprint arXiv:2306.04459
Uncertainty in Natural Language Processing: Sources, Quantification, and Applications
[PDF] Uncertainty in Natural Language Processing: Sources ... - arXiv
https://arxiv.org/pdf/2306.04459
Then, we systemically review uncertainty quantification approaches and the main applications. Finally, we discuss the challenges of uncertainty.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
huang2023look
\cite{huang2023look}
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models
http://arxiv.org/abs/2307.10236v4
The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains. However, erroneous generations, such as false predictions, misinformation, and hallucination made by LLMs, have also raised severe concerns for the trustworthiness of LLMs', especi...
true
true
Huang, Yuheng and Song, Jiayang and Wang, Zhijie and Zhao, Shengming and Chen, Huaming and Juefei-Xu, Felix and Ma, Lei
2023
null
null
null
arXiv preprint arXiv:2307.10236
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models
Look Before You Leap: An Exploratory Study of Uncertainty ... - arXiv
https://arxiv.org/abs/2307.10236
The recent performance leap of Large Language Models (LLMs) opens up new opportunities across numerous industrial applications and domains.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
fadeeva2023lm
\cite{fadeeva2023lm}
LM-polygraph: Uncertainty estimation for language models
null
null
true
false
Fadeeva, Ekaterina and Vashurin, Roman and Tsvigun, Akim and Vazhentsev, Artem and Petrakov, Sergey and Fedyanin, Kirill and Vasilev, Daniil and Goncharova, Elizaveta and Panchenko, Alexander and Panov, Maxim and others
2023
null
null
null
null
LM-polygraph: Uncertainty estimation for language models
LM-Polygraph: Uncertainty Estimation for Language Models
http://arxiv.org/pdf/2311.07383v1
Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often "hallucinate", i.e., fabricate facts without providing users an apparent means to discern the veracity o...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kendall2017uncertainties
\cite{kendall2017uncertainties}
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
http://arxiv.org/abs/1703.04977v2
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic u...
true
true
Kendall, Alex and Gal, Yarin
2017
null
null
null
NeurIPS
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
[PDF] What Uncertainties Do We Need in Bayesian Deep Learning ... - NIPS
http://papers.neurips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.pdf
Quantifying uncertainty in computer vision applications can be largely divided into regression set- tings such as depth regression, and classification settings
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
bridle1990probabilistic
\cite{bridle1990probabilistic}
Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition
null
null
true
false
Bridle, John S
1990
null
null
null
null
Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition
PROBABILISTIC INTERPRETATION OF FEEDFORWARD ...
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=818b3279ba393e0c0aeea200652199e8f4c59942
by M COSTA · Cited by 37 — J. S. Bridle 1989, "Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition," in Neu-.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
hendrycks2017a
\cite{hendrycks2017a}
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
http://arxiv.org/abs/1610.02136v3
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distributi...
true
true
Dan Hendrycks and Kevin Gimpel
2017
null
null
null
null
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
A Baseline for Detecting Misclassified and Out-of- ...
https://arxiv.org/abs/1610.02136
by D Hendrycks · 2016 · Cited by 4553 — We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
jurafsky2000speech
\cite{jurafsky2000speech}
Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition
null
null
true
false
Jurafsky, Daniel and Martin, James H
2000
null
null
null
null
Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition
Speech and Language Processing: An Introduction to Natural ...
https://www.amazon.com/Speech-Language-Processing-Introduction-Computational/dp/0130950696
An introduction to natural language processing, computational linguistics and speech recognition. ISBN-13: 978-0130950697, ISBN-10: 0130950696.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
fomicheva2020unsupervised
\cite{fomicheva2020unsupervised}
Unsupervised Quality Estimation for Neural Machine Translation
http://arxiv.org/abs/2005.10608v2
Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation and time for training. As an alternative...
true
true
Fomicheva, Marina and Sun, Shuo and Yankovskaya, Lisa and Blain, Frédéric and Guzmán, Francisco and Fishel, Mark and Aletras, Nikolaos and Chaudhary, Vishrav and Specia, Lucia
2020
null
null
null
null
Unsupervised Quality Estimation for Neural Machine Translation
Unsupervised Quality Estimation for Neural Machine Translation
http://arxiv.org/pdf/2005.10608v2
Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it is aimed to inform the user on the quality of the MT output at test time. Existing approaches require large amounts of expert annotated data, computation and time for training. As an alternative...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
malinin2021uncertainty
\cite{malinin2021uncertainty}
Uncertainty Estimation in Autoregressive Structured Prediction
http://arxiv.org/abs/2002.07650v5
Uncertainty estimation is important for ensuring safety and robustness of AI systems. While most research in the area has focused on un-structured prediction tasks, limited work has investigated general uncertainty estimation approaches for structured prediction. Thus, this work aims to investigate uncertainty estimati...
true
true
Malinin, Andrey and Gales, Mark
2021
null
null
null
null
Uncertainty Estimation in Autoregressive Structured Prediction
Uncertainty Estimation in Autoregressive Structured Prediction
http://arxiv.org/pdf/2002.07650v5
Uncertainty estimation is important for ensuring safety and robustness of AI systems. While most research in the area has focused on un-structured prediction tasks, limited work has investigated general uncertainty estimation approaches for structured prediction. Thus, this work aims to investigate uncertainty estimati...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
vovk2005algorithmic
\cite{vovk2005algorithmic}
Algorithmic learning in a random world
null
null
true
false
Vovk, Vladimir and Gammerman, Alexander and Shafer, Glenn
2005
null
null
null
null
Algorithmic learning in a random world
Algorithmic Learning in a Random World
https://www.amazon.ca/Algorithmic-Learning-Random-World-Vladimir/dp/0387001522
Algorithmic Learning in a Random Worlddescribes recent theoretical and experimental developments in building computable approximations to Kolmogorov's
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
gal2016dropout
\cite{gal2016dropout}
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
http://arxiv.org/abs/1506.02142v6
Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computa...
true
true
Gal, Yarin and Ghahramani, Zoubin
2016
null
null
null
null
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Representing Model Uncertainty in Deep Learning - arXiv
https://arxiv.org/abs/1506.02142
In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
yu2022learning
\cite{yu2022learning}
Learning Uncertainty for Unknown Domains with Zero-Target-Assumption
null
null
true
false
Yu, Yu and Sajjad, Hassan and Xu, Jia
2022
null
null
null
null
Learning Uncertainty for Unknown Domains with Zero-Target-Assumption
Learning Uncertainty for Unknown Domains with Zero-Target ...
https://openreview.net/forum?id=pWVASryOyFw
In this paper, the authors propose to use a Maximum-Entropy Rewarded Reinforcement Learning framework to select training data for NLP tasks, the goal of which is to maximize generalization. Weaknesses: The authors only proved the role of entropy in selecting data, but this paper does not elaborate on the motivation and...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kuhn2023semantic
\cite{kuhn2023semantic}
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
http://arxiv.org/abs/2302.09664v3
We introduce a method to measure uncertainty in large language models. For tasks like question answering, it is essential to know when we can trust the natural language outputs of foundation models. We show that measuring uncertainty in natural language is challenging because of "semantic equivalence" -- different sent...
true
true
Kuhn, Lorenz and Gal, Yarin and Farquhar, Sebastian
2023
null
null
null
null
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
Semantic Uncertainty: Linguistic Invariances for ... - OpenReview
https://openreview.net/forum?id=VD-AYtP0dve
Summary: The paper proposes an approach called semantic entropy, which incorporates linguistic invariances for uncertainty estimation in NLG.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
duan2023shifting
\cite{duan2023shifting}
Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models
http://arxiv.org/abs/2307.01379v3
Large Language Models (LLMs) show promising results in language generation and instruction following but frequently "hallucinate", making their outputs less reliable. Despite Uncertainty Quantification's (UQ) potential solutions, implementing it accurately within LLMs is challenging. Our research introduces a simple he...
true
true
Duan, Jinhao and Cheng, Hao and Wang, Shiqi and Wang, Chenan and Zavalny, Alex and Xu, Renjing and Kailkhura, Bhavya and Xu, Kaidi
2024
null
null
null
null
Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models
Shifting Attention to Relevance: Towards the Predictive ...
https://arxiv.org/abs/2307.01379
by J Duan · 2023 · Cited by 172 — Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models. Authors:Jinhao Duan, Hao
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kadavath2022language
\cite{kadavath2022language}
Language Models (Mostly) Know What They Know
http://arxiv.org/abs/2207.05221v4
We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self...
true
true
Kadavath, Saurav and Conerly, Tom and Askell, Amanda and Henighan, Tom and Drain, Dawn and Perez, Ethan and Schiefer, Nicholas and Hatfield-Dodds, Zac and DasSarma, Nova and Tran-Johnson, Eli and others
2022
null
null
null
arXiv preprint arXiv:2207.05221
Language Models (Mostly) Know What They Know
Language Models (Mostly) Know What They Know
http://arxiv.org/pdf/2207.05221v4
We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
malinin2018predictive
\cite{malinin2018predictive}
Predictive Uncertainty Estimation via Prior Networks
http://arxiv.org/abs/1802.10501v4
Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictive can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Differe...
true
true
Malinin, Andrey and Gales, Mark
2018
null
null
null
null
Predictive Uncertainty Estimation via Prior Networks
Predictive Uncertainty Estimation via Prior Networks
http://arxiv.org/pdf/1802.10501v4
Estimating how uncertain an AI system is in its predictions is important to improve the safety of such systems. Uncertainty in predictive can result from uncertainty in model parameters, irreducible data uncertainty and uncertainty due to distributional mismatch between the test and training data distributions. Differe...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
darrin2022rainproof
\cite{darrin2022rainproof}
Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data
http://arxiv.org/abs/2212.09171v2
Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the ...
true
true
Darrin, Maxime and Piantanida, Pablo and Colombo, Pierre
2023
null
null
null
null
Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data
RAINPROOF: An umbrella to shield text generators from ...
https://aclanthology.org/2023.emnlp-main.357.pdf
by M Darrin · 2023 · Cited by 39 — RAINPROOF is a Relative informAItioN Projection OOD detection framework that shields text generators from out-of-distribution data, using soft-
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
vashurin2025benchmarking
\cite{vashurin2025benchmarking}
Benchmarking uncertainty quantification methods for large language models with lm-polygraph
null
null
true
false
Vashurin, Roman and Fadeeva, Ekaterina and Vazhentsev, Artem and Rvanova, Lyudmila and Vasilev, Daniil and Tsvigun, Akim and Petrakov, Sergey and Xing, Rui and Sadallah, Abdelrahman and Grishchenkov, Kirill and others
2025
null
null
null
Transactions of the Association for Computational Linguistics
Benchmarking uncertainty quantification methods for large language models with lm-polygraph
Benchmarking Uncertainty Quantification Methods for Large ...
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00737/128713/Benchmarking-Uncertainty-Quantification-Methods
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph | Transactions of the Association for Computational Linguistics | MIT Press Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Lyudmila Rvanova, Daniil Vasilev, Akim Tsvigun, Sergey Petrakov, Rui Xing, Abdelrahman Sadallah, Ki...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
santilli2024spurious
\cite{santilli2024spurious}
On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks
null
null
true
false
Santilli, Andrea and Xiong, Miao and Kirchhof, Michael and Rodriguez, Pau and Danieli, Federico and Suau, Xavier and Zappella, Luca and Williamson, Sinead and Golinski, Adam
2024
null
null
null
null
On a spurious interaction between uncertainty scores and answer evaluation metrics in generative qa tasks
On a Spurious Interaction between Uncertainty Scores & ...
https://openreview.net/pdf?id=jGtL0JFdeD
by A Santilli · Cited by 3 — In this paper, we highlight that some UQ methods and answer evaluation metrics are spuriously correlated via the response length, which leads to falsely
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
santilli2025revisiting
\cite{santilli2025revisiting}
Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results
http://arxiv.org/abs/2504.13677v2
Uncertainty Quantification (UQ) in Language Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics like AUROC to assess how well UQ methods (e.g., negative sequence probabilities) correlate with task correctness functions (e.g., ROUGE-L). We show that mutual biases--when both UQ me...
true
true
Santilli, Andrea and Golinski, Adam and Kirchhof, Michael and Danieli, Federico and Blaas, Arno and Xiong, Miao and Zappella, Luca and Williamson, Sinead
2025
null
null
null
arXiv preprint arXiv:2504.13677
Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results
Spurious Interactions with Response Length Bias Results
https://arxiv.org/pdf/2504.13677?
by A Santilli · 2025 · Cited by 3 — Uncertainty Quantification (UQ) in Language. Models (LMs) is key to improving their safety and reliability. Evaluations often use metrics.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
mehta2024evaluating
\cite{mehta2024evaluating}
Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis
http://arxiv.org/abs/2303.03242v1
Although deep learning (DL) models have shown great success in many medical image analysis tasks, deployment of the resulting models into real clinical contexts requires: (1) that they exhibit robustness and fairness across different sub-populations, and (2) that the confidence in DL model predictions be accurately exp...
true
true
Mehta, Raghav and Shui, Changjian and Arbel, Tal
2024
null
null
null
null
Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis
Evaluating the Fairness of Deep Learning Uncertainty Estimates in ...
https://arxiv.org/abs/2303.03242
In this work, we present the first exploration of the effect of popular fairness models on overcoming biases across subgroups in medical image analysis.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kuzmin-etal-2023-uncertainty
\cite{kuzmin-etal-2023-uncertainty}
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?
null
null
true
false
Kuzmin, Gleb and Vazhentsev, Artem and Shelmanov, Artem and Han, Xudong and Suster, Simon and Panov, Maxim and Panchenko, Alexander and Baldwin, Timothy
2023
null
https://aclanthology.org/2023.ijcnlp-main.48/
10.18653/v1/2023.ijcnlp-main.48
null
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?
Uncertainty Estimation for Debiased Models: Does Fairness Hurt ...
https://aclanthology.org/2023.ijcnlp-main.48/
Uncertainty Estimation for Debiased Models: Does Fairness Hurt Reliability?. In Proceedings of the 13th International Joint Conference on Natural Language
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kuzucu2023uncertainty
\cite{kuzucu2023uncertainty}
Uncertainty as a Fairness Measure
null
null
true
false
Kuzucu, Selim and Cheong, Jiaee and Gunes, Hatice and Kalkan, Sinan
2023
null
null
null
arXiv preprint arXiv:2312.11299
Uncertainty as a Fairness Measure
[2312.11299] Uncertainty-based Fairness Measures - arXiv
https://arxiv.org/abs/2312.11299
We introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty.
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
kaiser2022uncertainty
\cite{kaiser2022uncertainty}
Uncertainty-aware predictive modeling for fair data-driven decisions
null
null
true
false
Kaiser, Patrick and Kern, Christoph and Rügamer, David
2022
null
null
null
arXiv preprint arXiv:2211.02730
Uncertainty-aware predictive modeling for fair data-driven decisions
Uncertainty-aware predictive modeling for fair data-driven ...
https://openreview.net/forum?id=8DXj-ze0x_s
Uncertainty-aware predictive modeling for fair data-driven decisions | OpenReview, NeurIPS 2022 Workshop TSRML. The authors highlight the importance of acco...
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
2505.23996v1
tahir2023fairness
\cite{tahir2023fairness}
Fairness through Aleatoric Uncertainty
http://arxiv.org/abs/2304.03646v2
We propose a simple yet effective solution to tackle the often-competing goals of fairness and utility in classification tasks. While fairness ensures that the model's predictions are unbiased and do not discriminate against any particular group or individual, utility focuses on maximizing the model's predictive perfor...
true
true
Tahir, Anique and Cheng, Lu and Liu, Huan
2023
null
null
null
null
Fairness through Aleatoric Uncertainty
Fairness through Aleatoric Uncertainty
http://arxiv.org/pdf/2304.03646v2
We propose a simple yet effective solution to tackle the often-competing goals of fairness and utility in classification tasks. While fairness ensures that the model's predictions are unbiased and do not discriminate against any particular group or individual, utility focuses on maximizing the model's predictive perfor...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Mcal
\cite{Mcal}
Synthetic quantitative MRI through relaxometry modelling
null
null
true
false
Callaghan, Martina F. and Mohammadi, Siawoosh and Weiskopf, Nikolaus
2,016
null
https://dx.doi.org/10.1002/nbm.3658
10.1002/nbm.3658
NMR in Biomedicine
Synthetic quantitative MRI through relaxometry modelling
Synthetic quantitative MRI through relaxometry modelling - PMC
https://pmc.ncbi.nlm.nih.gov/articles/PMC5132086/
The proposed synthetic qMRI approach shows promise for furthering our understanding of the inter‐relation of MRI parameters and for maximizing
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Jand
\cite{Jand}
Synthetic MRI for stroke: a qualitative and quantitative pilot study
null
null
true
false
André, Joachim and Barrit, Sami and Jissendi, Patrice
2022
null
null
10.1038/s41598-022-15204-8
Scientific Reports
Synthetic MRI for stroke: a qualitative and quantitative pilot study
(PDF) Synthetic MRI for stroke: a qualitative and quantitative pilot study
https://www.researchgate.net/publication/361826097_Synthetic_MRI_for_stroke_a_qualitative_and_quantitative_pilot_study
Synthetic MR provides qualitative and quantitative multi-parametric data about tissue properties. in a single acquisition. Its use in stroke imaging is not
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Emoy
\cite{Emoy}
A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data
null
null
true
false
Moya-Sáez, Elisa and Peña-Nogales, Óscar and Luis-García, Rodrigo de and Alberola-López, Carlos
2021
null
https://www.sciencedirect.com/science/article/pii/S0169260721004454
https://doi.org/10.1016/j.cmpb.2021.106371
Computer Methods and Programs in Biomedicine
A deep learning approach for synthetic MRI based on two routine sequences and training with synthetic data
A deep learning approach for synthetic MRI based on two routine ...
https://pubmed.ncbi.nlm.nih.gov/34525411/
**Conclusions:** These results show that our approach is able to provide realistic parametric maps and weighted images out of a CNN that (a) is trained with a synthetic dataset and (b) needs only two inputs, which are in turn obtained from a common full-brain acquisition that takes less than 8 min of scan time. * Bra...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Kgop
\cite{Kgop}
Synthetic data in generalizable, learning-based neuroimaging
null
null
true
false
Gopinath, Karthik and Hoopes, Andrew and Alexander, Daniel C. and Arnold, Steven E. and Balbastre, Yael and Billot, Benjamin and Casamitjana, Adrià and Cheng, You and Chua, Russ Yue Zhi and Edlow, Brian L. and Fischl, Bruce and Gazula, Harshvardhan and Hoffmann, Malte and Keene, C. Dirk and Kim, Seunghoi and Kimberly, ...
2024
11
https://doi.org/10.1162/imag_a_00337
10.1162/imag_a_00337
Imaging Neuroscience
Synthetic data in generalizable, learning-based neuroimaging
Synthetic data in generalizable, learning-based ...
https://direct.mit.edu/imag/article/doi/10.1162/imag_a_00337/124867/Synthetic-data-in-generalizable-learning-based
by K Gopinath · 2024 · Cited by 17 — Synthetic data have emerged as an attractive option for developing machine-learning methods in human neuroimaging, particularly in magnetic resonance imaging (
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Jigl
\cite{Jigl}
SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry
null
null
true
false
Juan E. Iglesias and Benjamin Billot and Yaël Balbastre and Colin Magdamo and Steven E. Arnold and Sudeshna Das and Brian L. Edlow and Daniel C. Alexander and Polina Golland and Bruce Fischl
2023
null
https://www.science.org/doi/abs/10.1126/sciadv.add3607
10.1126/sciadv.add3607
Science Advances
SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry
SynthSR: A public AI tool to turn heterogeneous clinical brain scans ...
https://pubmed.ncbi.nlm.nih.gov/36724222/
null
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
jwil
\cite{jwil}
Limits of Transfer Learning
http://arxiv.org/abs/2006.12694v1
Transfer learning involves taking information and insight from one problem domain and applying it to a new problem domain. Although widely used in practice, theory for transfer learning remains less well-developed. To address this, we prove several novel results related to transfer learning, showing the need to careful...
true
true
Jake Williams and Abel Tadesse and Tyler Sam and Huey Sun and George D. Montanez
2020
null
https://arxiv.org/abs/2006.12694
null
null
Limits of Transfer Learning
Limits of Transfer Learning
http://arxiv.org/pdf/2006.12694v1
Transfer learning involves taking information and insight from one problem domain and applying it to a new problem domain. Although widely used in practice, theory for transfer learning remains less well-developed. To address this, we prove several novel results related to transfer learning, showing the need to careful...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
weli
\cite{weli}
Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective
null
null
true
false
Li, Wei and Zhao, Yifei and Chen, Xi and Xiao, Yang and Qin, Yuanyuan
2019
null
null
10.1109/JBHI.2018.2839771
IEEE Journal of Biomedical and Health Informatics
Detecting Alzheimer's Disease on Small Dataset: A Knowledge Transfer Perspective
Detecting Alzheimer's Disease on Small Dataset
http://ieeexplore.ieee.org/document/8362917/
In addition, we proposed an effective knowledge transfer method to diminish the disparity among different datasets and improve the
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
jval
\cite{jval}
Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic Review
http://arxiv.org/abs/2102.01530v2
Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In MRI, transfer learning is important for developing strategies that address the variation in MR images. Additionally, transfer learning is beneficial to re-u...
true
true
Valverde, Juan Miguel and Imani, Vandad and Abdollahzadeh, Ali and De Feo, Riccardo and Prakash, Mithilesh and Ciszek, Robert and Tohka, Jussi
2021
null
http://dx.doi.org/10.3390/jimaging7040066
10.3390/jimaging7040066
Journal of Imaging
Transfer Learning in Magnetic Resonance Brain Imaging: a Systematic Review
Transfer Learning in Magnetic Resonance Brain Imaging
https://www.researchgate.net/publication/350576269_Transfer_Learning_in_Magnetic_Resonance_Brain_Imaging_A_Systematic_Review
The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
smat
\cite{smat}
Employing deep learning and transfer learning for accurate brain tumor detection
null
null
true
false
Mathivanan, Sandeep Kumar and Sonaimuthu, Sridevi and Murugesan, Sankar and Rajadurai, Hariharan and Shivahare, Basu Dev and Shah, Mohd Asif
2024
null
null
10.1038/s41598-024-57970-7
Scientific Reports
Employing deep learning and transfer learning for accurate brain tumor detection
(PDF) Employing deep learning and transfer learning for accurate ...
https://www.researchgate.net/publication/379337705_Employing_deep_learning_and_transfer_learning_for_accurate_brain_tumor_detection
This study delves into the potential of deep transfer learning architectures to elevate the accuracy of brain tumor diagnosis. Transfer learning
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Vtha
\cite{Vtha}
SinGAN-Seg: Synthetic training data generation for medical image segmentation
null
null
true
false
Thambawita, Vajira and Salehi, Pegah and Sheshkal, Sajad Amouei and Hicks, Steven A. and Hammer, Hugo L. and Parasa, Sravanthi and Lange, Thomas de and Halvorsen, Pål and Riegler, Michael A.
2022
05
https://doi.org/10.1371/journal.pone.0267976
10.1371/journal.pone.0267976
PLOS ONE
SinGAN-Seg: Synthetic training data generation for medical image segmentation
SinGAN-Seg: Synthetic training data generation for medical image segmentation
http://arxiv.org/pdf/2107.00471v2
Analyzing medical data to find abnormalities is a time-consuming and costly task, particularly for rare abnormalities, requiring tremendous efforts from medical experts. Artificial intelligence has become a popular tool for the automatic processing of medical data, acting as a supportive tool for doctors. However, the ...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Awah
\cite{Awah}
CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection
http://arxiv.org/abs/2103.05094v1
Coronavirus (COVID-19) is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 seems to have a detrimental effect on the global economy and health. A positive chest X-ray of infected patients is a crucial step in the battle against COVID-19. Early results sugges...
true
true
Waheed, Abdul and Goyal, Muskan and Gupta, Deepak and Khanna, Ashish and Al-Turjman, Fadi and Pinheiro, Plácido Rogerio
2020
null
null
10.1109/ACCESS.2020.2994762
IEEE Access
CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection
(PDF) CovidGAN: Data Augmentation using Auxiliary Classifier GAN ...
https://www.researchgate.net/publication/341401062_CovidGAN_Data_Augmentation_using_Auxiliary_Classifier_GAN_for_Improved_Covid-19_Detection
By adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We hope this method will speed up COVID-19 detection and lead to
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Bahm
\cite{Bahm}
Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks
null
null
true
false
Ahmad, Bilal and Sun, Jun and You, Qi and Palade, Vasile and Mao, Zhongjie
2022
null
https://www.mdpi.com/2227-9059/10/2/223
null
Biomedicines
Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks
(PDF) Brain Tumor Classification Using a Combination of Variational ...
https://www.researchgate.net/publication/358017457_Brain_Tumor_Classification_Using_a_Combination_of_Variational_Autoencoders_and_Generative_Adversarial_Networks
This paper proposes a framework based on unsupervised deep generative neural networks to solve this limitation. We combine two generative models in the proposed
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Hzha
\cite{Hzha}
QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps
null
null
true
false
Zhang, Hang and Nguyen, Thanh D. and Zhang, Jinwei and Marcille, Melanie and Spincemaille, Pascal and Wang, Yi and Gauthier, Susan A. and Sweeney, Elizabeth M.
2022
null
https://www.sciencedirect.com/science/article/pii/S2213158222000444
https://doi.org/10.1016/j.nicl.2022.102979
NeuroImage: Clinical
QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps
QSMRim-Net: Imbalance-aware learning for identification of chronic ...
https://pubmed.ncbi.nlm.nih.gov/35247730/
QSMRim-Net: Imbalance-aware learning for identification of chronic active multiple sclerosis lesions on quantitative susceptibility maps - PubMed We present QSMRim-Net, a data imbalance-aware deep neural network that fuses lesion-level radiomic and convolutional image features for automated identification of rim + lesi...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Ddab
\cite{Ddab}
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
http://arxiv.org/abs/2105.02340v1
Despite over two decades of progress, imbalanced data is still considered a significant challenge for contemporary machine learning models. Modern advances in deep learning have magnified the importance of the imbalanced data problem. The two main approaches to address this issue are based on loss function modification...
true
true
Damien Dablain and Bartosz Krawczyk and Nitesh V. Chawla
2021
null
https://arxiv.org/abs/2105.02340
null
null
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data
http://arxiv.org/pdf/2105.02340v1
Despite over two decades of progress, imbalanced data is still considered a significant challenge for contemporary machine learning models. Modern advances in deep learning have magnified the importance of the imbalanced data problem. The two main approaches to address this issue are based on loss function modification...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Msal
\cite{Msal}
Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder U-NET
http://arxiv.org/abs/1901.05733v1
In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim to improve the performance of supervised machine learning algorithms, therefore avoiding the problem of the lack of available ground truth. We propose a two-input two-output fully convolutional neural network...
true
true
Salem, Mostafa and Valverde, Sergi and Cabezas, Mariano and Pareto, Deborah and Oliver, Arnau and Salvi, Joaquim and Rovira, Àlex and Lladó, Xavier
2019
null
null
10.1109/ACCESS.2019.2900198
IEEE Access
Multiple Sclerosis Lesion Synthesis in MRI using an encoder-decoder U-NET
(PDF) Multiple Sclerosis Lesion Synthesis in MRI using an encoder ...
https://www.researchgate.net/publication/331238531_Multiple_Sclerosis_Lesion_Synthesis_in_MRI_using_an_encoder-decoder_U-NET
In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim to improve the performance of supervised machine
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Igoo
\cite{Igoo}
Generative Adversarial Networks
http://arxiv.org/abs/1406.2661v1
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training...
true
true
Ian J. Goodfellow and Jean Pouget-Abadie and Mehdi Mirza and Bing Xu and David Warde-Farley and Sherjil Ozair and Aaron Courville and Yoshua Bengio
2014
null
https://arxiv.org/abs/1406.2661
null
null
Generative Adversarial Networks
Generative Adversarial Networks
http://arxiv.org/pdf/1406.2661v1
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Wxia
\cite{Wxia}
GAN Inversion: A Survey
http://arxiv.org/abs/2101.05278v5
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, for the image to be faithfully reconstructed from the inverted code by the generator. As an emerging technique to bridge the real and fake image domains, GAN inversion plays an essential role in enabling the pretrained GAN ...
true
true
Weihao Xia and Yulun Zhang and Yujiu Yang and Jing-Hao Xue and Bolei Zhou and Ming-Hsuan Yang
2022
null
https://arxiv.org/abs/2101.05278
null
null
GAN Inversion: A Survey
GAN Inversion: A Survey
http://arxiv.org/pdf/2101.05278v5
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, for the image to be faithfully reconstructed from the inverted code by the generator. As an emerging technique to bridge the real and fake image domains, GAN inversion plays an essential role in enabling the pretrained GAN ...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Mmir
\cite{Mmir}
Conditional Generative Adversarial Nets
http://arxiv.org/abs/1411.1784v1
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this...
true
true
Mehdi Mirza and Simon Osindero
2014
null
http://arxiv.org/abs/1411.1784
null
CoRR
Conditional Generative Adversarial Nets
Conditional Generative Adversarial Nets
http://arxiv.org/pdf/1411.1784v1
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Kthe
\cite{Kthe}
Robustness of Conditional GANs to Noisy Labels
http://arxiv.org/abs/1811.03205v1
We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. A standard training of conditional GANs will not only produce samples with wrong labels, but also generate poor quality samples. We consider two scenarios, depending on whether the noise m...
true
true
Kiran Koshy Thekumparampil and Ashish Khetan and Zinan Lin and Sewoong Oh
2018
null
https://arxiv.org/abs/1811.03205
null
null
Robustness of Conditional GANs to Noisy Labels
Robustness of Conditional GANs to Noisy Labels
http://arxiv.org/pdf/1811.03205v1
We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. A standard training of conditional GANs will not only produce samples with wrong labels, but also generate poor quality samples. We consider two scenarios, depending on whether the noise m...
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Wehua
\cite{Wehua}
Correcting Noisy Multilabel Predictions: Modeling Label Noise through Latent Space Shifts
http://arxiv.org/abs/2502.14281v3
Noise in data appears to be inevitable in most real-world machine learning applications and would cause severe overfitting problems. Not only can data features contain noise, but labels are also prone to be noisy due to human input. In this paper, rather than noisy label learning in multiclass classifications, we inste...
true
true
Weipeng Huang and Qin Li and Yang Xiao and Cheng Qiao and Tie Cai and Junwei Liao and Neil J. Hurley and Guangyuan Piao
2025
null
https://arxiv.org/abs/2502.14281
null
null
Correcting Noisy Multilabel Predictions: Modeling Label Noise through Latent Space Shifts
[PDF] Correcting Noisy Multilabel Predictions: Modeling Label Noise ...
http://arxiv.org/pdf/2502.14281
Once the shifted latent variable still locates in the right latent space, the generated label noise will also follow the pattern. (in particular
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Hbae
\cite{Hbae}
From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model
http://arxiv.org/abs/2205.00690v3
Noisy labels are inevitable yet problematic in machine learning society. It ruins the generalization of a classifier by making the classifier over-fitted to noisy labels. Existing methods on noisy label have focused on modifying the classifier during the training procedure. It has two potential problems. First, these m...
true
true
HeeSun Bae and Seungjae Shin and Byeonghu Na and JoonHo Jang and Kyungwoo Song and Il-Chul Moon
2,022
null
https://arxiv.org/abs/2205.00690
null
null
From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model
[PDF] Noisy Prediction Calibration via Generative Model
https://icml.cc/media/icml-2022/Slides/18350_oZIPQgX.pdf
NPC models the relation between output of a classifier and the true label via generative model. NPC consistently boosts the classification performances of pre-
Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis
2505.23353v1
Vkel
\cite{Vkel}
Prior Image-Constrained Reconstruction using Style-Based Generative Models
http://arxiv.org/abs/2102.12525v2
Obtaining a useful estimate of an object from highly incomplete imaging measurements remains a holy grail of imaging science. Deep learning methods have shown promise in learning object priors or constraints to improve the conditioning of an ill-posed imaging inverse problem. In this study, a framework for estimating a...
true
true
Kelkar, Varun A and Anastasio, Mark
2,021
18--24 Jul
https://proceedings.mlr.press/v139/kelkar21a.html
null
null
Prior Image-Constrained Reconstruction using Style-Based Generative Models
Prior Image-Constrained Reconstruction using Style-Based ...
http://proceedings.mlr.press/v139/kelkar21a/kelkar21a.pdf
by VA Kelkar · 2021 · Cited by 33 — Style-based generative models have been known to be able to control individual semantic features, or styles, in an image by varying the disentangled. Page 2
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
bengio2009curriculum
\cite{bengio2009curriculum}
Curriculum learning
null
null
true
false
Bengio, Yoshua and Louradour, Jérôme and Collobert, Ronan and Weston, Jason
2009
null
https://doi.org/10.1145/1553374.1553380
10.1145/1553374.1553380
null
Curriculum learning
Curriculum learning
https://en.wikipedia.org/wiki/Curriculum_learning
Curriculum learning is a technique in machine learning in which a model is trained on examples of increasing difficulty.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
cl_survey
\cite{cl_survey}
Curriculum Learning: A Survey
http://arxiv.org/abs/2101.10382v3
Training machine learning models in a meaningful order, from the easy samples to the hard ones, using curriculum learning can provide performance improvements over the standard training approach based on random data shuffling, without any additional computational costs. Curriculum learning strategies have been successf...
true
true
Petru Soviany and Radu Tudor Ionescu and Paolo Rota and Nicu Sebe
2022
null
https://arxiv.org/abs/2101.10382
null
null
Curriculum Learning: A Survey
Curriculum Learning: A Survey
http://arxiv.org/pdf/2101.10382v3
Training machine learning models in a meaningful order, from the easy samples to the hard ones, using curriculum learning can provide performance improvements over the standard training approach based on random data shuffling, without any additional computational costs. Curriculum learning strategies have been successf...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
cl_nlu
\cite{cl_nlu}
Curriculum Learning for Natural Language Understanding
null
null
true
false
Xu, Benfeng and Zhang, Licheng and Mao, Zhendong and Wang, Quan and Xie, Hongtao and Zhang, Yongdong
2020
null
https://aclanthology.org/2020.acl-main.542
10.18653/v1/2020.acl-main.542
null
Curriculum Learning for Natural Language Understanding
[PDF] Curriculum Learning for Natural Language Understanding - Digie
https://api.digie.ai/publications/Curriculum-Learning-for-NLU.pdf
Natural Language Understanding (NLU), which re- quires machines to understand and reason with hu- man language, is a crucial yet challenging problem. Recently,
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
cl_bert
\cite{cl_bert}
Pre-training a BERT with Curriculum Learning by Increasing Block-Size of Input Text
null
null
true
false
Nagatsuka, Koichi and Broni-Bediako, Clifford and Atsumi, Masayasu
2021
null
https://aclanthology.org/2021.ranlp-1.112
null
null
Pre-training a BERT with Curriculum Learning by Increasing Block-Size of Input Text
Pre-training a BERT with Curriculum Learning by Increasing Block ...
https://aclanthology.org/2021.ranlp-1.112/
We propose a new CL method which gradually increases the block-size of input text for training the self-attention mechanism of BERT and its variants.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
bert_lrc
\cite{bert_lrc}
Modeling Easiness for Training Transformers with Curriculum Learning
null
null
true
false
Ranaldi, Leonardo and Pucci, Giulia and Zanzotto, Fabio Massimo
2023
null
https://aclanthology.org/2023.ranlp-1.101
null
null
Modeling Easiness for Training Transformers with Curriculum Learning
Modeling Easiness for Training Transformers with Curriculum ...
https://aclanthology.org/2023.ranlp-1.101/
In this paper, building on Curriculum Learning, we propose a novel, linguistically motivated measure to determine example complexity for organizing examples
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
orca
\cite{orca}
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
http://arxiv.org/abs/2306.02707v1
Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous traini...
true
true
Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah
2023
null
https://arxiv.org/abs/2306.02707
null
null
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
http://arxiv.org/pdf/2306.02707v1
Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous traini...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
curr_instr
\cite{curr_instr}
Instruction Tuning with Human Curriculum
http://arxiv.org/abs/2310.09518v4
In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the potential advantages of employing diverse curriculum strategies, and (3) delineate a synthetic instruction-response generation framework that complements our theoretical approach. Distinct from the existing instruction tuning dataset, our gen...
true
true
Lee, Bruce W and Cho, Hyunsoo and Yoo, Kang Min
2024
null
https://aclanthology.org/2024.findings-naacl.82
10.18653/v1/2024.findings-naacl.82
null
Instruction Tuning with Human Curriculum
Instruction Tuning with Human Curriculum
http://arxiv.org/pdf/2310.09518v4
In this work, we (1) introduce Curriculum Instruction Tuning, (2) explore the potential advantages of employing diverse curriculum strategies, and (3) delineate a synthetic instruction-response generation framework that complements our theoretical approach. Distinct from the existing instruction tuning dataset, our gen...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
feng2024
\cite{feng2024}
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase Pretraining
http://arxiv.org/abs/2412.15285v1
Pretraining large language models effectively requires strategic data selection, blending and ordering. However, key details about data mixtures especially their scalability to longer token horizons and larger model sizes remain underexplored due to limited disclosure by model developers. To address this, we formalize ...
true
true
Steven Feng and Shrimai Prabhumoye and Kezhi Kong and Dan Su and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro
2024
null
https://arxiv.org/abs/2412.15285
null
null
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase Pretraining
Maximize Your Data's Potential: Enhancing LLM Accuracy with Two ...
https://arxiv.org/abs/2412.15285
A two-phase approach for pretraining outperforms random data ordering and natural distribution of tokens by 3.4% and 17% on average accuracies.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
babylm_2023
\cite{babylm_2023}
Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
null
null
true
false
Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan
2023
null
https://aclanthology.org/2023.conll-babylm.1
10.18653/v1/2023.conll-babylm.1
null
Findings of the {B}aby{LM} Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on ...
https://aclanthology.org/2023.conll-babylm.1/
The BabyLM Challenge findings focus on sample-efficient pretraining on developmentally plausible corpora, presented at the 27th Conference on Computational
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
babylm_2024
\cite{babylm_2024}
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
http://arxiv.org/abs/2412.05149v1
The BabyLM Challenge is a community effort to close the data-efficiency gap between human and computational language learners. Participants compete to optimize language model training on a fixed language data budget of 100 million words or less. This year, we released improved text corpora, as well as a vision-and-lang...
true
true
Michael Y. Hu and Aaron Mueller and Candace Ross and Adina Williams and Tal Linzen and Chengxu Zhuang and Ryan Cotterell and Leshem Choshen and Alex Warstadt and Ethan Gotlieb Wilcox
2024
null
https://arxiv.org/abs/2412.05149
null
null
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
[2504.08165] Findings of the BabyLM Challenge
https://arxiv.org/abs/2504.08165
View a PDF of the paper titled Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora, by Alex Warstadt and 10 other authors From over 30 submissions, we extract concrete recommendations on how best to train data-efficient language models, and on where future efforts should ...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
less_is_more
\cite{less_is_more}
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies
http://arxiv.org/abs/2410.22886v2
Curriculum Learning has been a popular strategy to improve the cognitive plausibility of Small-Scale Language Models (SSLMs) in the BabyLM Challenge. However, it has not led to considerable improvements over non-curriculum models. We assess whether theoretical linguistic acquisition theories can be used to specify more...
true
true
Suchir Salhan and Richard Diehl Martinez and Zébulon Goriely and Paula Buttery
2024
null
https://arxiv.org/abs/2410.22886
null
null
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies
‪Suchir Salhan‬ - ‪Google Scholar‬
https://scholar.google.com/citations?user=xOo9sisAAAAJ&hl=en
Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies. S Salhan, RD Martinez, Z Goriely
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
prophetnet
\cite{prophetnet}
{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence {P}re-training
null
null
true
false
Qi, Weizhen and Yan, Yu and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming
2020
null
https://aclanthology.org/2020.findings-emnlp.217
10.18653/v1/2020.findings-emnlp.217
null
{P}rophet{N}et: Predicting Future N-gram for Sequence-to-{S}equence {P}re-training
ProphetNet: Predicting Future N-gram for Sequence-to- ...
https://arxiv.org/abs/2001.04063
by W Qi · 2020 · Cited by 542 — This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
future_lens
\cite{future_lens}
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
http://arxiv.org/abs/2311.04897v1
We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position $t$ in an input, can we reliably anticipate the tokens ...
true
true
Pal, Koyena and Sun, Jiuding and Yuan, Andrew and Wallace, Byron and Bau, David
2023
null
https://aclanthology.org/2023.conll-1.37
10.18653/v1/2023.conll-1.37
null
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
Future Lens: Anticipating Subsequent Tokens from a Single Hidden State
http://arxiv.org/pdf/2311.04897v1
We conjecture that hidden state vectors corresponding to individual input tokens encode information sufficient to accurately predict several tokens ahead. More concretely, in this paper we ask: Given a hidden (internal) representation of a single token at position $t$ in an input, can we reliably anticipate the tokens ...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
gloeckle2024mtp
\cite{gloeckle2024mtp}
Better & Faster Large Language Models via Multi-token Prediction
http://arxiv.org/abs/2404.19737v1
Large language models such as GPT and Llama are trained with a next-token prediction loss. In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. More specifically, at each position in the training corpus, we ask the model to predict the fol...
true
true
Fabian Gloeckle and Badr Youbi Idrissi and Baptiste Rozière and David Lopez-Paz and Gabriel Synnaeve
2024
null
https://arxiv.org/abs/2404.19737
null
null
Better & Faster Large Language Models via Multi-token Prediction
Better & Faster Large Language Models via Multi-token ...
https://www.reddit.com/r/LocalLLaMA/comments/1dj9xql/better_faster_large_language_models_via/
In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
blockwise_parallel_decoding
\cite{blockwise_parallel_decoding}
Blockwise Parallel Decoding for Deep Autoregressive Models
http://arxiv.org/abs/1811.03115v1
Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the le...
true
true
Stern, Mitchell and Shazeer, Noam and Uszkoreit, Jakob
2018
null
https://proceedings.neurips.cc/paper_files/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf
null
null
Blockwise Parallel Decoding for Deep Autoregressive Models
Blockwise Parallel Decoding for Deep Autoregressive Models
http://arxiv.org/pdf/1811.03115v1
Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the le...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
layerskip
\cite{layerskip}
{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding
null
null
true
false
Elhoushi, Mostafa and Shrivastava, Akshat and Liskovich, Diana and Hosmer, Basil and Wasti, Bram and Lai, Liangzhen and Mahmoud, Anas and Acun, Bilge and Agarwal, Saurabh and Roman, Ahmed and Aly, Ahmed and Chen, Beidi and Wu, Carole-Je...
2024
null
https://aclanthology.org/2024.acl-long.681
10.18653/v1/2024.acl-long.681
null
{L}ayer{S}kip: Enabling Early Exit Inference and Self-Speculative Decoding
Enabling Early Exit Inference and Self-Speculative Decoding
https://aclanthology.org/2024.acl-long.681/
We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
kangaroo
\cite{kangaroo}
Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
http://arxiv.org/abs/2404.18911v1
Speculative decoding has demonstrated its effectiveness in accelerating the inference of large language models while maintaining a consistent sampling distribution. However, the conventional approach of training a separate draft model to achieve a satisfactory token acceptance rate can be costly. Drawing inspiration fr...
true
true
Fangcheng Liu and Yehui Tang and Zhenhua Liu and Yunsheng Ni and Kai Han and Yunhe Wang
2024
null
https://arxiv.org/abs/2404.18911
null
null
Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
NeurIPS Poster Kangaroo: Lossless Self-Speculative Decoding for ...
https://neurips.cc/virtual/2024/poster/93829
Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting However, the conventional approach of training separate draft model to achieve a satisfactory token acceptance rate can be costly and impractical. In this paper, we propose a novel self-speculative decoding framework \emph{Kanga...
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
draft_verify
\cite{draft_verify}
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
http://arxiv.org/abs/2309.08168v2
We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model. This approach is characterized by a two-stage process: drafting and verification. The drafting stage generates draft tokens at a slightly lower quality but more quickly,...
true
true
Zhang, Jun and Wang, Jue and Li, Huan and Shou, Lidan and Chen, Ke and Chen, Gang and Mehrotra, Sharad
2024
null
https://aclanthology.org/2024.acl-long.607
10.18653/v1/2024.acl-long.607
null
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding
Draft & Verify: Lossless Large Language Model ...
https://aclanthology.org/2024.acl-long.607/
by J Zhang · 2024 · Cited by 130 — We present a novel inference scheme, self-speculative decoding, for accelerating Large Language Models (LLMs) without the need for an auxiliary model.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
swift
\cite{swift}
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
http://arxiv.org/abs/2410.06916v2
Speculative decoding (SD) has emerged as a widely used paradigm to accelerate LLM inference without compromising quality. It works by first employing a compact model to draft multiple tokens efficiently and then using the target LLM to verify them in parallel. While this technique has achieved notable speedups, most ex...
true
true
Heming Xia and Yongqi Li and Jun Zhang and Cunxiao Du and Wenjie Li
2024
null
https://arxiv.org/abs/2410.06916
null
null
SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
SWIFT: On-the-Fly Self-Speculative Decoding for LLM ...
https://github.com/hemingkx/SWIFT
SWIFT is an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference.
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
koala
\cite{koala}
KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning
http://arxiv.org/abs/2408.08146v1
Large Language Models (LLMs) exhibit high inference latency due to their autoregressive decoding nature. While the draft head in speculative decoding mitigates this issue, its full potential remains unexplored. In this paper, we introduce KOALA (K-layer Optimized Adversarial Learning Architecture), an orthogonal approa...
true
true
Kaiqi Zhang and Jing Zhao and Rui Chen
2024
null
https://arxiv.org/abs/2408.08146
null
null
KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning
hemingkx/SpeculativeDecodingPapers: Must-read papers ... - GitHub
https://github.com/hemingkx/SpeculativeDecodingPapers
[pdf], 2024.08. KOALA: Enhancing Speculative Decoding for LLM via Multi-Layer Draft Heads with Adversarial Learning Kaiqi Zhang, Jing Zhao, Rui Chen. [pdf]
Pre-Training Curriculum for Multi-Token Prediction in Language Models
2505.22757v1
medusa
\cite{medusa}
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
http://arxiv.org/abs/2401.10774v3
Large Language Models (LLMs) employ auto-regressive decoding that requires sequential computation, with each step reliant on the previous one's output. This creates a bottleneck as each step necessitates moving the full model parameters from High-Bandwidth Memory (HBM) to the accelerator's cache. While methods such as ...
true
true
Tianle Cai and Yuhong Li and Zhengyang Geng and Hongwu Peng and Jason D. Lee and Deming Chen and Tri Dao
2024
null
https://arxiv.org/abs/2401.10774
null
null
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
Medusa: Simple Framework for Accelerating LLM ...
https://github.com/FasterDecoding/Medusa
Medusa is a simple framework that democratizes the acceleration techniques for LLM generation with multiple decoding heads.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
lee1985determination
\cite{lee1985determination}
Determination of {3D} human body postures from a single view
null
null
true
false
Lee, Hsi-Jian and Chen, Zen
1985
null
null
null
Computer Vision, Graphics, and Image Processing
Determination of {3D} human body postures from a single view
Determination of 3D human body postures from a single view
https://www.sciencedirect.com/science/article/abs/pii/0734189X85900945
In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known. 20...
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
mehta2017monocular
\cite{mehta2017monocular}
Monocular {3D} human pose estimation in the wild using improved cnn supervision
null
null
true
false
Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian
2017
null
null
null
null
Monocular {3D} human pose estimation in the wild using improved cnn supervision
Monocular 3D Human Pose Estimation In The Wild Using Improved ...
https://arxiv.org/abs/1611.09813
Authors: Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt. View a PDF of the paper titled Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision, by Dushyant Mehta and 6 other authors.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
pavlakos2017coarse
\cite{pavlakos2017coarse}
Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose
http://arxiv.org/abs/1611.07828v2
This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to ...
true
true
Pavlakos, Georgios and Zhou, Xiaowei and Derpanis, Konstantinos G and Daniilidis, Kostas
2017
null
null
null
null
Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose
Coarse-to-Fine Volumetric Prediction for Single-Image 3D ...
https://arxiv.org/abs/1611.07828
arXiv:1611.07828 (cs). View a PDF of the paper titled Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose, by Georgios Pavlakos and 3 other authors.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
cai2019exploiting
\cite{cai2019exploiting}
Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks
null
null
true
false
Cai, Yujun and Ge, Liuhao and Liu, Jun and Cai, Jianfei and Cham, Tat-Jen and Yuan, Junsong and Thalmann, Nadia Magnenat
2019
null
null
null
null
Exploiting spatial-temporal relationships for {3D} pose estimation via graph convolutional networks
vanoracai/Exploiting-Spatial-temporal-Relationships-for- ...
https://github.com/vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks
This is the code for the paper ICCV 2019 Exploiting Spatial-temporal Relationships for 3D Pose Estimation via Graph Convolutional Networks in Pytorch.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
martinez2017simple
\cite{martinez2017simple}
A simple yet effective baseline for 3d human pose estimation
http://arxiv.org/abs/1705.03098v2
Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a l...
true
true
Martinez, Julieta and Hossain, Rayat and Romero, Javier and Little, James J
2017
null
null
null
null
A simple yet effective baseline for 3d human pose estimation
A simple yet effective baseline for 3d human pose estimation
http://arxiv.org/pdf/1705.03098v2
Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a l...
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
zhao2019semantic
\cite{zhao2019semantic}
{Semantic Graph Convolutional Networks for 3D Human Pose Regression}
null
null
true
false
Zhao, Long and Peng, Xi and Tian, Yu and Kapadia, Mubbasir and Metaxas, Dimitris N
2019
null
null
null
null
{Semantic Graph Convolutional Networks for 3D Human Pose Regression}
Semantic Graph Convolutional Networks for 3D Human ...
https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Semantic_Graph_Convolutional_Networks_for_3D_Human_Pose_Regression_CVPR_2019_paper.pdf
by L Zhao · 2019 · Cited by 714 — SemGCN is a novel network for regression tasks with graph data, capturing semantic information, and applied to 3D human pose regression.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
zou2021modulated
\cite{zou2021modulated}
Modulated graph convolutional network for {3D} human pose estimation
null
null
true
false
Zou, Zhiming and Tang, Wei
2021
null
null
null
null
Modulated graph convolutional network for {3D} human pose estimation
Modulated Graph Convolutional Network for 3D Human Pose ...
https://ieeexplore.ieee.org/document/9710217/
The graph convolutional network (GCN) has recently achieved promising performance of 3D human pose estimation (HPE) by modeling the relationship among body
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
zhao2022graformer
\cite{zhao2022graformer}
{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation}
null
null
true
false
Zhao, Weixi and Wang, Weiqiang and Tian, Yunjie
2022
null
null
null
null
{GraFormer: Graph-oriented Transformer for {3D} Pose Estimation}
[PDF] GraFormer: Graph-Oriented Transformer for 3D Pose Estimation
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_GraFormer_Graph-Oriented_Transformer_for_3D_Pose_Estimation_CVPR_2022_paper.pdf
In this paper, we use a new transformer architecture by embedding graph convolution operations to improve the. 3D pose estimation. 3. Method. As shown in Figure
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
ZhongTMM2024
\cite{ZhongTMM2024}
{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation}
null
null
true
false
Zhong, Yuanhong and Yang, Guangxia and Zhong, Daidi and Yang, Xun and Wang, Shanshan
2024
null
null
10.1109/TMM.2023.3347095
IEEE Transactions on Multimedia
{Frame-Padded Multiscale Transformer for Monocular {3D} Human Pose Estimation}
Frame-Padded Multiscale Transformer for Monocular 3D Human ...
https://dl.acm.org/doi/10.1109/TMM.2023.3347095
Abstract. Monocular 3D human pose estimation is an ill-posed problem in computer vision due to its depth ambiguity. Most existing works supplement the depth
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
WangTMM2024
\cite{WangTMM2024}
{Exploiting Temporal Correlations for {3D} Human Pose Estimation}
null
null
true
false
Wang, Ruibin and Ying, Xianghua and Xing, Bowei
2024
null
null
10.1109/TMM.2023.3323874
IEEE Transactions on Multimedia
{Exploiting Temporal Correlations for {3D} Human Pose Estimation}
Exploiting Temporal Correlations for 3D Human Pose ...
http://ieeexplore.ieee.org/document/10278485/
Exploiting the rich temporal information in human pose sequences to facilitate 3D pose estimation has garnered particular attention.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
tang20233d
\cite{tang20233d}
{3D} human pose estimation with spatio-temporal criss-cross attention
null
null
true
false
Tang, Zhenhua and Qiu, Zhaofan and Hao, Yanbin and Hong, Richang and Yao, Ting
2023
null
null
null
null
{3D} human pose estimation with spatio-temporal criss-cross attention
zhenhuat/STCFormer: (CVPR2023)3D Human Pose ...
https://github.com/zhenhuat/STCFormer
This is the readme file for the code release of 3D Human Pose Estimation with Spatio-Temporal Criss-cross Attention on PyTorch platform.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
li2022mhformer
\cite{li2022mhformer}
MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
http://arxiv.org/abs/2111.12707v4
Estimating 3D human poses from monocular videos is a challenging task due to depth ambiguity and self-occlusion. Most existing works attempt to solve both issues by exploiting spatial and temporal relationships. However, those works ignore the fact that it is an inverse problem where multiple feasible solutions (i.e., ...
true
true
Li, Wenhao and Liu, Hong and Tang, Hao and Wang, Pichao and Van Gool, Luc
2022
null
null
null
null
MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
Multi-Hypothesis Transformer for 3D Human Pose Estimation - arXiv
https://arxiv.org/abs/2111.12707
We propose a Multi-Hypothesis Transformer (MHFormer) that learns spatio-temporal representations of multiple plausible pose hypotheses.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
liu2023posynda
\cite{liu2023posynda}
PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation
http://arxiv.org/abs/2308.09678v2
Existing 3D human pose estimators face challenges in adapting to new datasets due to the lack of 2D-3D pose pairs in training sets. To overcome this issue, we propose \textit{Multi-Hypothesis \textbf{P}ose \textbf{Syn}thesis \textbf{D}omain \textbf{A}daptation} (\textbf{PoSynDA}) framework to bridge this data disparity...
true
true
Liu, Hanbing and He, Jun-Yan and Cheng, Zhi-Qi and Xiang, Wangmeng and Yang, Qize and Chai, Wenhao and Wang, Gaoang and Bao, Xu and Luo, Bin and Geng, Yifeng and others
2023
null
null
null
null
PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation
PoSynDA: Multi-Hypothesis Pose Synthesis Domain ...
https://github.com/hbing-l/PoSynDA
PoSynDA is a novel framework for 3D Human Pose Estimation (3D HPE) that addresses the challenges of adapting to new datasets due to the scarcity of 2D-3D
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
chen2023hdformer
\cite{chen2023hdformer}
HDFormer: High-order Directed Transformer for 3D Human Pose Estimation
http://arxiv.org/abs/2302.01825v2
Human pose estimation is a challenging task due to its structured data sequence nature. Existing methods primarily focus on pair-wise interaction of body joints, which is insufficient for scenarios involving overlapping joints and rapidly changing poses. To overcome these issues, we introduce a novel approach, the High...
true
true
Chen, Hanyuan and He, Jun-Yan and Xiang, Wangmeng and Cheng, Zhi-Qi and Liu, Wei and Liu, Hanbing and Luo, Bin and Geng, Yifeng and Xie, Xuansong
2023
null
null
null
null
HDFormer: High-order Directed Transformer for 3D Human Pose Estimation
High-order Directed Transformer for 3D Human Pose Estimation
https://arxiv.org/abs/2302.01825
HDFormer is a novel approach for 3D human pose estimation using high-order bone and joint relationships, addressing issues with overlapping
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
hu2021conditional
\cite{hu2021conditional}
Conditional Directed Graph Convolution for 3D Human Pose Estimation
http://arxiv.org/abs/2107.07797v2
Graph convolutional networks have significantly improved 3D human pose estimation by representing the human skeleton as an undirected graph. However, this representation fails to reflect the articulated characteristic of human skeletons as the hierarchical orders among the joints are not explicitly presented. In this p...
true
true
Hu, Wenbo and Zhang, Changgong and Zhan, Fangneng and Zhang, Lei and Wong, Tien-Tsin
2021
null
null
null
null
Conditional Directed Graph Convolution for 3D Human Pose Estimation
Conditional Directed Graph Convolution for 3D Human Pose Estimation
http://arxiv.org/pdf/2107.07797v2
Graph convolutional networks have significantly improved 3D human pose estimation by representing the human skeleton as an undirected graph. However, this representation fails to reflect the articulated characteristic of human skeletons as the hierarchical orders among the joints are not explicitly presented. In this p...
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
ci2019optimizing
\cite{ci2019optimizing}
Optimizing network structure for {3D} human pose estimation
null
null
true
false
Ci, Hai and Wang, Chunyu and Ma, Xiaoxuan and Wang, Yizhou
2019
null
null
null
null
Optimizing network structure for {3D} human pose estimation
Optimizing Network Structure for 3D Human Pose Estimation
https://openaccess.thecvf.com/content_ICCV_2019/papers/Ci_Optimizing_Network_Structure_for_3D_Human_Pose_Estimation_ICCV_2019_paper.pdf
by H Ci · 2019 · Cited by 312 — A 3D human pose is naturally represented by a skele- tal graph parameterized by the 3D locations of the body joints such as elbows and knees. See Figure 1. When
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
liu2020comprehensive
\cite{liu2020comprehensive}
A comprehensive study of weight sharing in graph networks for {3D} human pose estimation
null
null
true
false
Liu, Kenkun and Ding, Rongqi and Zou, Zhiming and Wang, Le and Tang, Wei
2020
null
null
null
null
A comprehensive study of weight sharing in graph networks for {3D} human pose estimation
A Comprehensive Study of Weight Sharing in Graph ...
https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550324.pdf
by K Liu · Cited by 182 — Graph convolutional networks (GCNs) have been applied to. 3D human pose estimation (HPE) from 2D body joint detections and have shown encouraging performance.
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
wang2018non
\cite{wang2018non}
Non-local Neural Networks
http://arxiv.org/abs/1711.07971v3
Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local o...
true
true
Wang, Xiaolong and Girshick, Ross and Gupta, Abhinav and He, Kaiming
2018
null
null
null
null
Non-local Neural Networks
[PDF] Non-Local Neural Networks - CVF Open Access
https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Non-Local_Neural_Networks_CVPR_2018_paper.pdf
Non-local operations capture long-range dependencies by computing a weighted sum of features at all positions, unlike local operations. They are efficient and
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
gong2023diffpose
\cite{gong2023diffpose}
DiffPose: Toward More Reliable 3D Pose Estimation
http://arxiv.org/abs/2211.16940v3
Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we expl...
true
true
Gong, Jia and Foo, Lin Geng and Fan, Zhipeng and Ke, Qiuhong and Rahmani, Hossein and Liu, Jun
2023
null
null
null
null
DiffPose: Toward More Reliable 3D Pose Estimation
DiffPose: Toward More Reliable 3D Pose Estimation
http://arxiv.org/pdf/2211.16940v3
Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we expl...
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation
2506.02853v1
holmquist2023diffpose
\cite{holmquist2023diffpose}
DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models
http://arxiv.org/abs/2211.16487v1
Traditionally, monocular 3D human pose estimation employs a machine learning model to predict the most likely 3D pose for a given input image. However, a single image can be highly ambiguous and induces multiple plausible solutions for the 2D-3D lifting step which results in overly confident 3D pose predictors. To this...
true
true
Holmquist, Karl and Wandt, Bastian
2023
null
null
null
null
DiffPose: Multi-hypothesis Human Pose Estimation using Diffusion models
Multi-hypothesis Human Pose Estimation using Diffusion models
https://arxiv.org/abs/2211.16487
We propose \emph{DiffPose}, a conditional diffusion model, that predicts multiple hypotheses for a given input image.