Modalities: Tabular, Text
Formats: csv
Size: < 1K
File size: 5,502 bytes
Commit: 7d5c8be
id	excerpt	target_paper_titles	source_paper_title	source_paper_url	year	split
1	There has yet to be a widely-adopted standard to understand ML interpretability, though there have been works proposing frameworks for interpretability [CITATION].	Towards A Rigorous Science of Interpretable Machine Learning[TITLE_SEPARATOR]Designing Theory-Driven User-Centric Explainable AI[TITLE_SEPARATOR]Unmasking Clever Hans predictors and assessing what machines really learn	A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI	https://arxiv.org/abs/1907.07374	2019	test
2	DBNs [CITATION] are essentially SAEs where the AE layers are replaced by RBMs.	Greedy Layer-Wise Training of Deep Networks[TITLE_SEPARATOR]A Fast Learning Algorithm for Deep Belief Nets	A survey on deep learning in medical image analysis	https://arxiv.org/abs/1702.05747	2017	test
3	NLMs [CITATION] characterize the probability of word sequences by neural networks, e.g., multi-layer perceptron (MLP) and recurrent neural networks (RNNs).	A neural probabilistic language model[TITLE_SEPARATOR]Recurrent neural network based language model[TITLE_SEPARATOR]Roberta: A robustly optimized BERT pretraining approach	A Survey of Large Language Models	https://arxiv.org/abs/2303.18223	2023	test
4	In order to shorten the training time and to speed up traditional SDA algorithms, Chen et al. proposed a modified version of SDA, i.e., Marginalized Stacked Linear Denoising Autoencoder (mSLDA) [CITATION].	Marginalized Denoising Autoencoders for Domain Adaptation[TITLE_SEPARATOR]Marginalizing stacked linear denoising autoencoders	A Comprehensive Survey on Transfer Learning	https://arxiv.org/abs/1911.02685	2019	test
5	Recent research has introduced prominent embedding models such as AngIE, Voyage, BGE, etc. [CITATION], which benefit from multi-task instruction tuning.	AnglE-optimized Text Embeddings[TITLE_SEPARATOR]Flagembedding[TITLE_SEPARATOR]Voyage’s embedding models	Retrieval-Augmented Generation for Large Language Models: A Survey	https://arxiv.org/abs/2312.10997	2024	test
6	Using strong LLMs (usually closed-source ones, e.g., GPT-4, Claude, ChatGPT) as an automated proxy for assessing LLMs has become a natural choice [218], as shown in Figure 2. With appropriate prompt design, the quality of evaluation and agreement with human judgment can be promising [CITATION].	AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback[TITLE_SEPARATOR]Large Language Models are not Fair Evaluators.[TITLE_SEPARATOR]Wider and deeper llm networks are fairer llm evaluators.[TITLE_SEPARATOR]Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena	A Survey on LLM-as-a-Judge	https://arxiv.org/abs/2411.15594	2025	test
7	Reinforcement Learning from Human Feedback (RLHF) [CITATION] is another crucial aspect of LLMs. This technique involves fine-tuning the model using human-generated responses as rewards, allowing the model to learn from its mistakes and improve its performance over time.	Fine-Tuning Language Models from Human Preferences[TITLE_SEPARATOR]Deep Reinforcement Learning from Human Preferences	A Survey on Evaluation of Large Language Models	https://arxiv.org/abs/2307.03109	2023	test
8	In knowledge distillation, a small student model is generally supervised by a large teacher model [CITATION]. The main idea is that the student model mimics the teacher model in order to obtain a competitive or even a superior performance.	Model compression[TITLE_SEPARATOR]Do deep nets really need to be deep?[TITLE_SEPARATOR]Distilling the knowledge in a neural network[TITLE_SEPARATOR]Do deep convolutional nets really need to be deep and convolutional?	A Survey on Evaluation of Large Language Models	https://arxiv.org/abs/2307.03109	2023	test
9	A denoising diffusion probabilistic model (DDPM) [CITATION] makes use of two Markov chains: a forward chain that perturbs data to noise, and a reverse chain that converts noise back to data. The former is typically hand-designed with the goal to transform any data distribution into a simple prior distribution (e.g., standard Gaussian), while the latter Markov chain reverses the former by learning transition kernels parameterized by deep neural networks.	Denoising diffusion probabilistic models[TITLE_SEPARATOR]Deep unsupervised learning using nonequilibrium thermodynamics	Diffusion Models: A Comprehensive Survey of Methods and Applications	https://arxiv.org/abs/2209.00796	2025	test
10	Neural network meta-learning has a long history [8], [17], [18]. However, its potential as a driver to advance the frontier of the contemporary deep learning industry has led to an explosion of recent research. In particular meta-learning has the potential to alleviate many of the main criticisms of contemporary deep learning [4], for instance by providing better data efficiency, exploitation of prior knowledge transfer, and enabling unsupervised and self-directed learning. Successful applications have been demonstrated in areas spanning few-shot image recognition [19], [20], unsupervised learning [21], data efficient [22], [23] and self-directed [24] reinforcement learning (RL), hyper-parameter optimization [25], and neural architecture search (NAS) [CITATION].	DARTS: Differentiable Architecture Search[TITLE_SEPARATOR]Neural Architecture Search With Reinforcement Learning[TITLE_SEPARATOR]Regularized Evolution For Image Classifier Architecture Search	Meta-Learning in Neural Networks: A Survey	https://arxiv.org/abs/2004.05439	2020	test
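The rows above are tab-separated, and the `target_paper_titles` field packs multiple titles into one cell joined by the literal token `[TITLE_SEPARATOR]`. A minimal sketch of parsing this structure with the Python standard library (the inline `SAMPLE` string is an illustrative stand-in for the actual file contents):

```python
import csv
import io

# Stand-in for the file contents; in practice, open the downloaded
# TSV file instead of this in-memory sample.
SAMPLE = (
    "id\texcerpt\ttarget_paper_titles\tsource_paper_title\tsource_paper_url\tyear\tsplit\n"
    "2\tDBNs [CITATION] are essentially SAEs where the AE layers are replaced by RBMs."
    "\tGreedy Layer-Wise Training of Deep Networks[TITLE_SEPARATOR]A Fast Learning Algorithm for Deep Belief Nets"
    "\tA survey on deep learning in medical image analysis"
    "\thttps://arxiv.org/abs/1702.05747\t2017\ttest\n"
)

# The file is tab-delimited, so pass delimiter="\t" to DictReader.
rows = list(csv.DictReader(io.StringIO(SAMPLE), delimiter="\t"))

for row in rows:
    # Split the packed field back into a list of individual titles.
    row["target_paper_titles"] = row["target_paper_titles"].split("[TITLE_SEPARATOR]")

print(rows[0]["target_paper_titles"][0])  # Greedy Layer-Wise Training of Deep Networks
```

Splitting on the full token rather than a single character avoids breaking titles that themselves contain brackets or commas.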