| uuid (string) | question (string) | answer_format (string) | anchor_pdf (list) | reference_pdf (list) | conference (list) |
|---|---|---|---|---|---|
8531101d-a0b8-50fa-9f0e-5a3c71417b4e | How do you calculate the three important parameters that appear in the second part of Figure 2? | Your answer should be a Python list of three string elements; each element is a formula in LaTeX format for calculating one parameter. | [
"Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction"
] | [] | [] |
85f81535-fc10-575c-995f-4739a4470d4b | Where can I get the code of the GreenKGC method? | Your answer should be the URL of the code of the GreenKGC method. | [
"GreenKGC: A Lightweight Knowledge Graph Completion Method"
] | [] | [] |
8781817f-474b-5c15-9f77-aa85055446b9 | The study employs two main methods to analyze linguistic features: LIWC and BERT. What are the advantages of the two methods respectively? | Your answer should be a Python list of two strings; each string clearly describes the advantages of one method. | [
"Tracing Linguistic Markers of Influence in a Large Online Organisation"
] | [] | [] |
87d2ad0c-a0ed-5e69-9007-571e601a142a | What is the collection process of the latest mainstream KBQA dataset used in the paper "Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering"? | Your answer should be a Python string. | [
"Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering"
] | [
"Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases"
] | [] |
881d40cb-62f6-57d9-b4bb-75ce6a1c2b89 | Which paper studies the concept of enhancing the coverage of a selective prediction system by re-attempting the questions on which it was not sufficiently confident? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA"
] | [
"acl2023"
] |
88fc505f-d156-52bc-8084-2ab7bbc5fb17 | To generate NSFW images, which technique is used in the paper titled "Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation"? Please tell me the full name, not the abbreviation. | Your answer should be a single string of the full name of the technique. | [
"Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation"
] | [
"Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models"
] | [] |
891093a5-a10b-5299-a421-a77713a8886e | According to Table 1, which of the baselines used in this paper has the highest average score? In which paper is this method proposed? And at which conference was that paper published? | Your answer should be a Python list with three items: the first item is the name of the baseline reaching the highest average score according to Table 1, the second item is the name of the paper in which this method is proposed, and the third item is the abbreviation of the conference name where that paper was published. | [
"ContraCLM: Contrastive Learning For Causal Language Model"
] | [] | [] |
895610a0-b3ae-5623-a8e2-e0731eae53f6 | Is there a paper that uses Explainable AI techniques to investigate how language models represent the expression of morality? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric"
] | [
"acl2023"
] |
896a8ab2-38ab-5ae9-95cc-2d883784e87e | In the main experiment of "AN LLM CAN FOOL ITSELF: A PROMPT-BASED ADVERSARIAL ATTACK", which high-level tasks do the five tasks used belong to, as categorized in the original paper? | Your answer should be a Python dictionary with the following format: {'task1': 'task1_category', 'task2': 'task2_category', 'task3': 'task3_category', 'task4': 'task4_category', 'task5': 'task5_category'}. YOU MUST USE THE EXACT NAMES OF THE CATEGORIES IN THE PAPER AND THE ABBREVIATIONS OF THE TASKS. | [
"An LLM can Fool Itself: A Prompt-Based Adversarial Attack"
] | [
"GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"
] | [] |
897a955d-9e8a-5836-a3b6-0ed48575f2b9 | In terms of evaluation results on SuperGLUE using RoBERTaBASE, on subtask ReCoRD, which model(s) achieve the overall best results? Additionally, which model(s) perform the best among the two-stage MTL models? | Your answer should be a Python dict (without \n) containing two keys, "overall best" and "two-stage MTL best"; each value is a Python list of strings, e.g. {"overall best": ["modelname1"], "two-stage MTL best": ["modelname2", "modelname3"]} | [
"ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale"
] | [] | [] |
8a08dc1f-6e2c-5e4b-9da4-34f5fb3ee073 | Which paper contains quantitative results demonstrating that taking VQ tokens as input is inferior to pixel images for dense recognition tasks? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process"
] | [
"iclr2024"
] |
8a732ed7-85bc-5355-aec7-97434295b153 | For the PLM with the lowest number of synset candidates under synset retrieval in the paper "Predicate Sense Disambiguation for UMR Annotation of Latin: Challenges and Insights", what negative impact will it have if it randomly blocks a certain percentage of input tokens during pre-training? | Your answer should be a Python string. | [
"Predicate Sense Disambiguation for UMR Annotation of Latin: Challenges and Insights"
] | [
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
] | [] |
8aca533a-c03d-5708-aaf4-320886de4a20 | In the paper that introduced the latest dataset used by RetinaQA, what innovation related to F1 was also applied in the evaluation of RetinaQA? | Your answer should be a paragraph describing the F1-related innovation. | [
"RetinaQA: A Robust Knowledge Base Question Answering Model for both Answerable and Unanswerable Questions"
] | [
"Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions",
"Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases",
"The Value of Semantic Parse Labeling for Knowledge Base Question Answering"
] | [] |
8ad76d2f-9b95-58ed-b20f-7c15fc30c2fc | To enhance scalability and effectiveness, which Ensemble Model does this paper ("MINERS: Multilingual Language Models as Semantic Retrievers") choose? In the source paper, on which datasets is this framework tested? | Your answer should be a Python list like ["string1", ["string2", "string3", ...]]. The first element should be a string, representing the name of the Ensemble Model. The second element should be a list of strings, representing the names of the datasets tested on. For these names, abbreviations are enough. | [
"MINERS: Multilingual Language Models as Semantic Retrievers"
] | [
"Efficient Zero-Shot Cross-lingual Inference via Retrieval"
] | [] |
8b394043-af79-5a1b-9c7f-a3437276f8af | In the framework used in the paper "ADAPTIVE DEEP SPIKING NEURAL NETWORK WITH GLOBAL-LOCAL LEARNING VIA BALANCED EXCITATORY AND INHIBITORY MECHANISM", what are the ANN2SNN conversion functions? | Your answer should be a python list of strings, e.g., ['function1', 'function2']. YOU MUST USE THE EXACT FUNCTION NAMES AS IN THE PAPER. | [
"Adaptive deep spiking neural network with global-local learning via balanced excitatory and inhibitory mechanism"
] | [
"SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence"
] | [] |
8b6057cc-77e5-5981-9d3f-5b9b62126d0e | What work proposes a model to learn a latent regular cell complex from data? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module"
] | [
"iclr2024"
] |
8b79fc88-0307-532a-bbdb-e8990fb27372 | Which paper considers the sensitive-data issue when prompting large language model APIs? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"796a2f4b-0702-52cb-8e05-241c378b828f"
] | [
"iclr2024"
] |
8ba9edfd-5a7a-5ec1-9d22-329443311f3b | Is there any paper that aligns speech and text embeddings better than CTC training? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"WACO: Word-Aligned Contrastive Learning for Speech Translation"
] | [
"acl2023"
] |
8beb15f1-2c64-5aa7-a1a3-1579452b2ecc | For the specific dataset where CLiCoTEA does not outperform all models, in which languages does this occur? | Your answer should be a Python list of string elements; every element is the abbreviation of a language mentioned in the paper, e.g. ["AR", "ES", "FR", ...]. | [
"Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages"
] | [] | [] |
8cc38e05-20e5-5a69-8b82-ecc09c03450a | According to the experiment results, how much better does the GPT-2 model perform on Task A compared to the CNN-BiLSTM model in terms of F1-score? | Your answer should be a single Python float, rounded to 2 decimal places. | [
"CNLP-NITS at SemEval-2023 Task 10: Online sexism prediction, PREDHATE!"
] | [] | [] |
8d540aba-a5d8-5da7-8c8d-c98a1dcbd507 | The anchor paper mentioned a service which uses the methodology proposed in this paper. What is the name of the service? On which page of the paper can I find graphic information on this service? | Your answer should be a python list with 2 elements, the first element being the name of the service, and the second element being the page number. The first element should be a string and the second element should be an integer. Use the exact name of the service from the paper without changing CAPITALIZATION. | [
"Expertise-Centric Prompting Framework for Financial Tabular Data Generation using Pre-trained Large Language Models"
] | [] | [] |
8d7e5c06-78b8-5454-849a-57140efaa80c | According to the paper, which dataset also uses a retrieval-based system for selecting relevant files? In that dataset, how many lines does the codebase contain on average? | Your answer should be a Python list of 2 elements: the first is the name of the dataset, and the second is the number of lines in thousands, rounded to the nearest integer, e.g. ["MMMU", 9] | [
"kGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution"
] | [
"SWE-bench: Can Language Models Resolve Real-world Github Issues?"
] | [] |
8e1ebc95-7523-5a09-b95c-85748f5825ae | Which paper is among the earliest to train on an extensive collection of signing video and subtitle pairs available from online platforms? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Gloss-Free End-to-End Sign Language Translation"
] | [
"acl2023"
] |
8e2d5903-f8ba-5504-aa9a-43538a8536a6 | In the paper that proposed the dataset used by ReFIR for the evaluation of the second experimental setup, which image restoration method was also proposed? Additionally, which metric used to evaluate that method was not applied in ReFIR? | Your answer should be a Python list of 2 strings: the name of the method and the name of the metric. | [
"ReFIR: Grounding Large Restoration Models with Retrieval Augmentation"
] | [
"Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild"
] | [] |
8e6f688d-d3d9-5055-89c4-b50ad752b208 | In the experiment results of the paper named "EIT: Enhanced Interactive Transformer", which model gets the highest RG-L on the summarization task? For the source paper of this model, what issue with neural network-based methods for abstractive summarization does that paper aim to address? | Your answer should be a single Python list; the first element is the string of the model name, the second element is the string of the issue, e.g. ["EIT", "Neural network-based methods for abstractive summarization often neglect..."]. | [
"EIT: Enhanced Interactive Transformer"
] | [
"Bottom-Up Abstractive Summarization"
] | [] |
8f53d4bf-6a6f-59b8-90ed-c6b555240a59 | Among the datasets proposed in the Introduction section of the paper "Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning", which one has the least Q-A pairs? | Your answer should be a single string, the name of the dataset as given in the Introduction section. | [
"Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning"
] | [
"Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models",
"A Dataset for Answering Time-Sensitive Questions",
"StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models",
"SituatedQA: Incorporating Extra-Linguistic Contexts into QA",
"Time-Aware Language Models as Temporal Knowledge Bases"
] | [] |
8f561f35-51ae-5330-97aa-f547f89f4d26 | According to the MSAD paper's review, which was previously the largest multi-domain VAD benchmark dataset? Additionally, in that dataset, which are the two scenarios with the most videos? | Your answer should be a Python list of 2 elements: the first is the name of the dataset, the second is a list of 2 strings, the full names of the scenarios. | [
"Advancing Video Anomaly Detection: A Concise Review and a New Dataset"
] | [
"Uncovering What, Why and How: A Comprehensive Benchmark for Causation Understanding of Video Anomaly",
"Not only Look, but also Listen: Learning Multimodal Violence Detection under Weak Supervision"
] | [] |
8f5ae2e9-9450-57e9-bc4f-069747845fdb | How were the data samples selected for the dataset used in the SFT of the paper "FACTALIGN: Long-form Factuality Alignment of Large Language Models"? | Your answer should be a Python string. | [
"FactAlign: Long-form Factuality Alignment of Large Language Models"
] | [
"What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning"
] | [] |
8f94a406-ef97-584a-a578-17a8b0380287 | Which paper first derived an online occupancy estimation technique to get a sqrt(T) bound for reinforcement learning in adversarial linear MDPs? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback"
] | [
"iclr2024"
] |
8fb6f8ec-fae3-5823-ba3f-21cdca6952a9 | How do the authors split the dataset for the experiments? | Your answer should be plain text. | [
"SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams"
] | [] | [] |
90082d94-579c-5dc2-a5c3-f9f6278857e2 | On how many datasets was COSA evaluated for its performance in Object Discovery and Composition? | Your answer should be an integer. | [
"Grounded Object-Centric Learning"
] | [] | [] |
90aa9ace-0a80-5867-893b-e342497160a1 | Is there any paper that tries to investigate LLMs' capabilities in solving elliptical constructions by using a test dataset based on the psycholinguistic notion of Thematic Fit? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"We Understand Elliptical Sentences, and Language Models should Too: A New Dataset for Studying Ellipsis and its Interaction with Thematic Fit"
] | [
"acl2023"
] |
921a63d8-1cb7-5162-bc06-b9546498e519 | Among StudentEval, HumanEval and MBPP, which one has the most test cases per problem? | Your answer should be a string, indicating the name of the dataset. | [
"StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code",
"Evaluating Large Language Models Trained on Code",
"Program Synthesis with Large Language Models"
] | [] | [] |
924003fa-668e-5dbc-8e2c-43aa69d5696c | Both papers, "Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization" and "Composing Parameter-Efficient Modules with Arithmetic Operation", focus on arithmetic on parameter-efficient modules. Which setting did the first paper mainly focus on that the second paper did not? | Your answer should be brief text on the setting mainly focused on only by the first paper. | [
"Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization",
"Composing Parameter-Efficient Modules with Arithmetic Operation"
] | [] | [] |
926c7917-2d65-5ca2-9e3b-2b7927962fbd | Between the two agent-based methods that are explicitly introduced in the Related Work section of LLM-DP, which one is not applied as a baseline? Why not? | Your answer should be a Python list of two strings: the first is the name of the method, the second is the reason why it is not applied. | [
"Dynamic Planning with a LLM"
] | [
"Voyager: An Open-Ended Embodied Agent with Large Language Models",
"ReAct: Synergizing Reasoning and Acting in Language Models"
] | [] |
927ff9af-42f7-5216-a6f9-f106e8ff6759 | On the HEML sentence level with the AUC metric, which baseline outperforms MIND under specific conditions? Is it the best variant according to the paper that proposed that baseline? | Your answer should be a Python list of two strings. The first string is the name of the baseline (with variant) that outperforms MIND, as given in the anchor PDF. The second string is either `true` or `false`. | [
"Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models"
] | [
"Language Models (Mostly) Know What They Know",
"Uncertainty Estimation in Autoregressive Structured Prediction",
"SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models",
"The Internal State of an LLM Knows When It’s Lying",
"Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus",
"HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models"
] | [] |
92c53685-6c5d-538a-9c62-887598de3301 | Is there a paper that utilizes the characteristics of human evolutionary knowledge to guide language models in generating scientific ideas? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Exploring and Verbalizing Academic Ideas by Concept Co-occurrence"
] | [
"acl2023"
] |
92dcdc85-f07b-5980-80c0-474447201940 | Are there any large-scale and open-source text simplification datasets dealing with long passages? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages"
] | [
"acl2023"
] |
93193629-3db3-5f41-93da-8282895eba7f | What is the relationship between the two papers in terms of datasets? | Your answer should be a Python string. | [
"Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT",
"What Does Parameter-free Probing Really Uncover?"
] | [] | [] |
93272751-e57b-55e7-a89a-ef2387c4d2be | Is there any paper that studies a teacher AI inferring mental states of a student role in a role-playing game setup using reinforcement learning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons"
] | [
"acl2023"
] |
932ee901-050a-5ef5-bb1e-0baeff576249 | According to Table 1, on how many mathematical task test sets was Rho-1 tested? | Your answer should be an integer. | [
"Not All Tokens Are What You Need for Pretraining"
] | [] | [] |
9373ac34-dd22-52b3-80da-16d94b2bcff7 | According to the paper "Full-Atom Peptide Design with Geometric Latent Diffusion", which model has the lowest success rate on PepBDB? In the paper that proposes that model, in what form is the overview of the proposed method given? | Your answer should be a Python list of 2 elements: the first is the name of the model, the second is a Python list of formulas in LaTeX format, e.g. ["method", ["formula_1", ..., "formula_n"]] | [
"Full-Atom Peptide Design with Geometric Latent Diffusion"
] | [
"EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks"
] | [] |
9388f82d-b44e-52a4-8e0e-4b0e93bd5876 | Which paper proposes the two-stage training method, i.e., task-specific fine-tuning and cross-domain pre-training, to train an open-domain dialogue evaluator using a self-collected dataset? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue"
] | [
"acl2023"
] |
93cbeec6-18b1-55ba-93f3-8be583414ea9 | Which paper about parameter-efficient finetuning first proposes to feed the pretrained weight instead of the activation to an adapter? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Parameter-Efficient Fine-Tuning without Introducing New Latency"
] | [
"acl2023"
] |
943cd5d2-df6b-588b-b985-70b2aa2e9f3b | How does the LISA algorithm choose which layers' parameters to freeze at each iteration? | Your answer should be a Python string. | [
"LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning"
] | [] | [] |
94424df2-3c4e-5f0d-b665-1ce0b2c0da54 | Among the baseline TTE approaches for the code-based task on Celiac Disease, which method achieves the best performance, excluding the new method proposed in this paper? | Your answer should be a Python string with the approach name. YOU MUST USE THE EXACT NAME FROM THE PAPER. | [
"MOTOR: A Time-to-Event Foundation Model For Structured Medical Records"
] | [] | [] |
948c99f1-12ba-5c05-8cbf-de25811480b1 | Is there a dialogue dataset where a speaker's utterance is grounded in their persona, consisting of image-text pairs representing their episodic memories? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"MPCHAT: Towards Multimodal Persona-Grounded Conversation"
] | [
"acl2023"
] |
94bf4901-caa8-50d8-9626-f34f0226b4e9 | Is there research that investigates embedding multi-bit data into watermarks to improve resilience to text corruption, particularly aimed at safeguarding keywords and syntactic elements from modification? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Robust Multi-bit Natural Language Watermarking through Invariant Features"
] | [
"acl2023"
] |
94ef0706-a58e-556e-b4c4-0dfe3f47ff63 | Is there any work that allows large numbers of model outputs to be encoded and compared by causal language models in a single forward pass? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"EEL: Efficiently Encoding Lattices for Reranking"
] | [
"acl2023"
] |
94ff4210-290d-53d9-90f7-56c5df1bed85 | What is the pipeline used to construct the dataset in the experiments of the paper "PepRec: Progressive Enhancement of Prompting for Recommendation"? | Your answer should be a Python string. | [
"PepRec: Progressive Enhancement of Prompting for Recommendation"
] | [
"Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects"
] | [] |
9640f248-b594-56b4-9cc6-9b242538ee40 | In the direct preceding work of H2O, are there any metrics tested other than perplexity? | Your answer should be a Python boolean. | [
"H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models",
"Efficient Streaming Language Models with Attention Sinks"
] | [] | [] |
966deaf9-83fe-5f5b-8b30-06c8e350375a | In the paper proposing Voila-A, which baselines do the main experimental results demonstrate that Voila-A outperforms? | Your answer should be a single Python list of strings; each string is a baseline name, e.g. ["baseline1_name", "baseline2_name"]. Note that for the names, the abbreviation is enough. | [] | [
"Voila-A: Aligning Vision-Language Models with User's Gaze Attention"
] | [
"neurips2024"
] |
969eef23-a666-5128-8553-069c1c546f0e | According to the author, what are the three main types of current dialogue evaluation catalogs? | Your answer should be a sentence stating the three main types. | [
"RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue"
] | [] | [] |
985616d9-fbc1-5329-824f-15b2d1d79de0 | Is there any dataset that contains minimally-contrasting social situations that lead to different decisions about which behaviors are appropriate in that situation? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"NormBank: A Knowledge Bank of Situational Social Norms"
] | [
"acl2023"
] |
98f8113e-ec12-53ee-877c-ed347c655fbd | Which paper found that using common character encodings and ciphers, or even just convincing the model that it is not communicating in natural language, can bypass the safety guardrails of large models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher"
] | [
"iclr2024"
] |
99231726-94e7-5dbf-80a5-947df93d9761 | Several information extraction and natural language processing tools were used in the anchor paper. What is the major limitation of prior works that the entity and relationship extractor used in the anchor paper was designed to solve? | Your answer should be brief text explaining the limitation solved by the extractor's design. | [
"Exploring Scientific Hypothesis Generation with Mamba"
] | [
"Packed Levitated Marker for Entity and Relation Extraction"
] | [] |
9945247a-acf3-5768-9e8c-3015d272434d | According to the RAV paper, up to 2019, which method performs the best overall on evidence retrieval? Additionally, what is that method's FEVER score with 1 sentence selected for the subtask of recognizing textual entailment? | Your answer should be a Python list of 2 elements: the first is a string, the name of the method, and the second is a float, rounded to 2 decimal places. | [
"Evidence Retrieval is almost All You Need for Fact Verification"
] | [
"UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification",
"FEVER: a large-scale dataset for Fact Extraction and VERification",
"Reasoning Over Semantic-Level Graph for Fact Checking"
] | [] |
997b8d8f-3d5f-57e6-9590-9e8c5591dacc | What are the model architecture parameters (dim, n_layers, head_dim, hidden_dim, n_heads, n_kv_heads) of the automatic evaluation model used in the aspect evaluation section of the paper? | Your answer should be a Python dictionary with the following keys: dim, n_layers, head_dim, hidden_dim, n_heads, n_kv_heads, e.g., {"dim": 768, "n_layers": 6, "head_dim": 64, "hidden_dim": 3072, "n_heads": 12, "n_kv_heads": 12}. | [
"Translating Across Cultures: LLMs for Intralingual Cultural Adaptation"
] | [
"Mixtral of Experts"
] | [] |
99965706-7450-5c82-9db7-a9f9605c5fc6 | Which research paper leverages event structure information from Abstract Meaning Representation (AMR) graphs to aid in recognizing causal relations between events? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Semantic Structure Enhanced Event Causality Identification"
] | [
"acl2023"
] |
99b40a01-942a-571c-9eb8-6be0ef070617 | What are the four types of compression methods used for BERT in the paper? Which one of them can be combined with the other three model compression methods? Can you describe a sketch of this method's procedure? | Your answer should be brief text naming the four compression methods used for BERT in the paper, identifying the method that can be combined with the other three, and sketching its procedure. | [
"Are Compressed Language Models Less Subgroup Robust?"
] | [
"Fast Vocabulary Transfer for Language Model Compression"
] | [] |
99cec4b0-ca19-56bb-83a0-7a79a4a14c9d | What is the distribution ratio of data sources for the toxicity ratings dataset used in the paper? | Your answer should be a Python dictionary mapping the data sources to their distribution ratios (between 0 and 1, rounded to 2 decimal places), e.g., {"data_source_1": 0.49, "data_source_2": 0.51}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"Rater Cohesion and Quality from a Vicarious Perspective"
] | [
"Designing Toxic Content Classification for a Diversity of Perspectives"
] | [] |
99d793f3-fa1b-5d68-a62b-ace9cdeca097 | How much higher is the percentage of "食品#品质" in the figure "Aspect category distributions (Test Set for Subtask3)" in the Overview of the SIGHAN 2024 paper than the percentage of "食品#品质" in the figure "training data category distributions" of HITSZ-HLT? | Your answer should be a Python float with three decimal places, and the answer should be in the range [0, 1] | [
"Overview of the SIGHAN 2024 shared task for Chinese dimensional aspect-based sentiment analysis",
"HITSZ-HLT at SIGHAN-2024 dimABSA Task: Integrating BERT and LLM for Chinese Dimensional Aspect-Based Sentiment Analysis"
] | [] | [] |
9a6dbbf5-4323-506d-aa1d-fbcbed930d35 | In the paper titled "TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models", which model is the NER model based on? For this base model, on which tasks is it evaluated in the main experiment of the source paper? | Your answer should be a single Python list like ["base_model_name", ["task1", "task2", ...]]. Note that for the task names, you should use the full names, not the abbreviations. | [
"TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models"
] | [
"LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention"
] | [] |
9a8224a4-c359-553a-b334-e1339c47a8a7 | What are the main differences between the backbone model used in the main experiments of the paper "VISION-BY-LANGUAGE FOR TRAINING-FREE COMPOSITIONAL IMAGE RETRIEVAL" and the standard Transformer? | Your answer should be a Python string. | [
"Vision-by-Language for Training-Free Compositional Image Retrieval"
] | [
"An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
] | [] |
9adcad9f-bd01-54de-a6cd-61d0a77e487d | According to the paper "Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges", what are the three main flavours of GNN layers? What is the relationship between GTs and GNNs? | Your answer should be a Python list of two elements; the first is a list of the three flavours, and the second is a Python string. | [
"Graph Language Models",
"Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges"
] | [] | [] |
9b05a21b-7190-547c-ae1c-2a4b21a84826 | Is there a paper that uses similarity scores to check knowledge in diffusion models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Multilingual Conceptual Coverage in Text-to-Image Models"
] | [
"acl2023"
] |
9bd68a1f-c2de-5448-b7b0-ddbc43b2165a | What is the detailed procedure of progressive learning followed in the instruction tuning section of this paper? | Your answer should be a Python string. | [
"Birdie: Advancing State Space Language Modeling with Dynamic Mixtures of Training Objectives"
] | [
"Orca 2: Teaching Small Language Models How to Reason"
] | [] |
9c1a0663-93b8-5ce8-9863-a837c86565c3 | How many hours and separate recordings are contained in the STT4SG-350 corpus? | Your answer should be a single Python list of two integers, e.g. [200, 5674]. The first integer represents hours, the second integer represents recordings. Note that you should use exact numbers, not approximations. | [] | [
"STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions"
] | [
"acl2023"
] |
9c51f6a0-85e7-51b9-a27c-679686eeb8e2 | In the architecture of MolMIM, which encoder is used? What is the core idea behind the design of this encoder? | Your answer should be a single Python list of two strings, like ["encoder_name", "sentences_about_core_idea"] | [
"Improving Small Molecule Generation using Mutual Information Machine"
] | [
"Perceiver: General Perception with Iterative Attention"
] | [] |
9c79b323-e07b-5af8-93b5-d69b4b8d0cff | In the benchmark C-LAP uses for image-observation evaluation, in which environment does Offline DV2 perform the best under the mixed setting? | Your answer should be a Python string, the name of the environment WITHOUT ANY explanation. | [
"Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning"
] | [
"Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations"
] | [] |
9ce8e94a-4eda-5f28-90cc-1342a03c3e51 | What dataset does this paper (titled "What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations") use for evaluation? According to its source paper, how many structured annotations does this dataset collect in total? | Your answer should be a single python list, the first element is the string of the dataset name, the second element is an integer number. | [
"What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations"
] | [
"Social Chemistry 101: Learning to Reason about Social and Moral Norms"
] | [] |
9ceca9e2-fa5e-5eb0-80be-d8f588116c1e | I'm interested in the method used by the UCB1-FLAD paper for dataset format transformation, and I would like to contribute to that method. Where can I find the guidelines? | Your answer should be a Python string, the website URL starting with "https://", as given in the paper. | [
"Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data"
] | [
"PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts"
] | [] |
9cf5ab0b-d8e0-5cee-b756-ba9f5f95b220 | Among existing Math and STEM QA datasets, which dataset other than TheoremQA includes theorems? In which paper is this dataset introduced? | Your answer should be a list of two strings, the first element is the name of the dataset, and the second element is the title of the source paper. | [
"TheoremQA: A Theorem-driven Question Answering Dataset"
] | [] | [] |
9d8e79c6-a8b8-5c1c-8b3c-e995166a26f7 | In the paper that proposes the TRL model which, according to the QATCH paper, manages to surpass ChatGPT on some category on average, what is the detailed formula of the total loss? | Your answer should be a string, the formula in LaTeX format. Note that you should expand the three parts of the total loss. | [
"QATCH: Benchmarking SQL-centric tasks with Table Representation Learning Models on Your Data"
] | [
"TAPAS: Weakly Supervised Table Parsing via Pre-training"
] | [] |
9e064ab4-16d4-572a-a37f-3121074570b0 | In the Comparisons experiment of the Metaworld benchmark in the paper "Prediction with Action: Visual Policy Learning via Joint Denoising Process", which benchmarks performed the best on easier tasks and harder tasks, respectively (excluding the method proposed in this article)? | Your answer should be a python list of strings, first element is the best performing benchmark on easier tasks, and the second element is the best performing benchmark on harder tasks. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER. | [
"Prediction with Action: Visual Policy Learning via Joint Denoising Process"
] | [] | [] |
9e0c4e78-b458-53fa-942a-e24a93316e0f | In this paper's experimental results on StepGame, which one of the three PLMs achieves the best performance when k=4? Please provide the name of the model. And what is this model's GitHub link? | Your answer should be a Python list of 2 strings, the name of the model, and the github link of this model. | [
"DepWiGNN: A Depth-wise Graph Neural Network for Multi-hop Spatial Reasoning in Text"
] | [
"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations"
] | [] |
9e5cea3f-285b-5879-a57a-8c4a19c0236d | Which dataset is the GrailQAbility dataset derived from? How many literals are there in the source dataset? | Your answer should be a single python list like ["dataset_name",100]. The first element of the list is a string and the second is an integer number. | [
"Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions"
] | [
"Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases"
] | [] |
9eb8dc14-f435-5774-ac7d-1bc63d5b72b4 | On which website can I find the conversation video between Sheldon and Leonard? | Your answer should be a pure text string starting with "https". DO NOT INCLUDE ANY OTHER INFORMATION OR CONTEXT IN THE ANSWER. | [
"MTP: A Dataset for Multi-Modal Turning Points in Casual Conversations"
] | [] | [] |
9f1e23b7-05ab-512e-8568-8d6fc0e95993 | In the smallest dataset that the given paper uses, how is the mutual information calculated? | Your answer should be a string, the formula of mutual information in LaTeX format. | [
"Semi-Supervised Fine-Tuning of Vision Foundation Models with Content-Style Decomposition"
] | [
"Galaxy Zoo DECaLS: Detailed Visual Morphology Measurements from Volunteers and Deep Learning for 314,000 Galaxies"
] | [] |
9f4464f1-93bd-58ea-9d06-be56dfc0f60b | Which numerical reasoning paper first published a dataset that considers different types of size of numbers and their representations in arithmetic questions? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"FERMAT: An Alternative to Accuracy for Numerical Reasoning"
] | [
"acl2023"
] |
9f506412-ee1f-5e65-af24-4f2a07fa9948 | Which object detector does this paper (titled "Can VLMs Play Action Role-Playing Games? Take Black Myth Wukong as a Study Case") use to assist the VLMs in better extracting useful information? On which datasets is this object detector pre-trained? | Your answer should be a list like ["detector_name", ["dataset1","dataset2",...]], where detector_name is the name of the object detector and ["dataset1","dataset2",...] is a list of dataset names (abbreviations) on which the object detector is pre-trained. | [
"Can VLMs Play Action Role-Playing Games? Take Black Myth Wukong as a Study Case"
] | [
"Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
] | [] |
a06cf968-8d7c-5d7a-b203-91bb312150b7 | Is there any paper that uses data collected from the Dark Web, specifically onion domains, to pretrain a language model? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"DarkBERT: A Language Model for the Dark Side of the Internet"
] | [
"acl2023"
] |
a0f6a1ac-45b3-5017-b320-9c698ae14d7a | Which one is newer among the public datasets used in the main experiment of this paper ("Query Routing for Homogeneous Tools: An Instantiation in the RAG Scenario")? Which link may I refer to in order to get this dataset? | Your answer should be a python list like ["string1", "string2"]. The first element should be the name (abbreviation) of the dataset. The second element should be a string, representing the link to the dataset. | [
"Query Routing for Homogeneous Tools: An Instantiation in the RAG Scenario"
] | [
"Let LLMs Take on the Latest Challenges! A Chinese Dynamic Question Answering Benchmark"
] | [] |
a1096589-1d1d-5577-bc3c-abd8a43a5b57 | What are the parameters of the base models used in the paper "PsychoLex: Unveiling the Psychological Mind of Large Language Models", including Layers, Model Dimension, FFN Dimension, Attention Heads and Key/Value Heads? | Your answer should be a nested dictionary, e.g., {'base_model1': {'Layers': 12, 'Model Dimension': 768, 'FFN Dimension': 3072, 'Attention Heads': 12, 'Key/Value Heads': 12}} | [
"PsychoLex: Unveiling the Psychological Mind of Large Language Models"
] | [
"The Llama 3 Herd of Models"
] | [] |
a18be027-94fa-53c8-9055-0f3066cc7ae8 | In the paper "Distributional Scaling Laws for Emergent Capabilities", what empirical evidence supports the RASP-Generalization Conjecture in the context of transformers' length generalization? | Your answer should be a sentence explaining the empirical evidence. | [
"Distributional Scaling Laws for Emergent Capabilities",
"What Algorithms can Transformers Learn? A Study in Length Generalization"
] | [] | [] |
a1bcf4ae-a49c-559d-83be-7e34162877d1 | What tasks were proposed in the paper that introduced the dataset used in this paper's experiments? | Your answer should be a python list of tasks, e.g. ["task1", "task2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"SARCAT: Generative Span-Act Guided Response Generation using Copy-enhanced Target Augmentation"
] | [
"MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents"
] | [] |
a1f5f36d-4119-508c-8484-38d296db5e04 | In the paper titled "ON-TRAC consortium systems for the IWSLT 2023 dialectal and low-resource speech translation tasks", which dataset is used when training the translation model X $\rightarrow$ FR, EN? Which institute is this dataset released by? | Your answer should be a single python list like ["dataset_name","institute_name"]. Note that you don't have to indicate the language pair in the dataset name. | [
"ON-TRAC Consortium Systems for the IWSLT 2023 Dialectal and Low-resource Speech Translation Tasks"
] | [
"CoVoST 2 and Massively Multilingual Speech-to-Text Translation"
] | [] |
a205abd2-6b89-55ea-afc9-8578fa39bf0d | In the results of topic modeling through LDA, which keyword is the most frequent among all the topics? How many times did it appear? The models' performance was measured using BERT-score. Which conference did the BERT-score authors present BERT-score at? | Your answer should be a brief text containing the most frequent keyword with its frequency and the conference name. | [
"EMO-KNOW: A Large Scale Dataset on Emotion-Cause"
] | [
"BERTScore: Evaluating Text Generation with BERT"
] | [] |
a24c70e9-4657-544f-aead-7f59db7b62b5 | Which datasets were used in the specific private text domain for experiments with the document-level machine translation framework in the paper "Granularity is crucial when applying differential privacy to text: An Investigation for Neural Machine Translation"? | Your answer should be a python list, e.g., ['dataset1', 'dataset2', ...]. YOU MUST USE THE EXACT NAMES OF THE DATASETS, RATHER THAN ABBREVIATIONS OR ALIASES. | [
"Granularity is crucial when applying differential privacy to text: An investigation for neural machine translation"
] | [
"DP-NMT: Scalable Differentially Private Machine Translation"
] | [] |
a2985096-8453-5fb7-9066-6f505c734248 | List the names of the baselines used in the paper "On the Compositional Generalization in Versatile Open-domain Dialogue", along with the titles of papers that proposed these baselines. | Your answer should be a Python list of baseline-title pair, e.g., [["baseline1", "title1"] , ["baseline2", "title2"], ...]. YOU MUST USE THE EXACT TEXT FROM THE PAPER AND THE FULL TITLE TEXT OF THE PAPERS. | [
"On the Compositional Generalization in Versatile Open-domain Dialogue"
] | [
"BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion",
"Prefix-Tuning: Optimizing Continuous Prompts for Generation",
"Combining Modular Skills in Multitask Learning"
] | [] |
a29af26f-73cc-5522-a388-638ecd3c09d3 | In the biography generation task applied in the Self-RAG paper, what are the five categories of freqValue? | Your answer should be a Python list of 5 strings, the 5 categories of freqValue. | [] | [
"Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection",
"FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
] | [
"iclr2024"
] |
a2ae7148-87aa-513c-81a3-921c09c7e35e | According to "Casting Hybrid Digital-Analog Training into Hierarchical Energy-Based Learning", how can the delta of the energy function be used in the EP gradient of the EBM and the BP gradient of the FF module? | Your answer should be in fluent English. | [
"Casting hybrid digital-analog training into hierarchical energy-based learning"
] | [] | [] |
a2f8cca9-a522-5fd2-bf2e-829fc82f749e | From which three aspects do the authors evaluate the unlearned model? | Your answer should be a python list, each element is a string. | [
"Machine Unlearning of Pre-trained Large Language Models"
] | [] | [] |
a343069f-cdd9-58b2-9abb-afb91e8f5360 | In the paper "Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors", on which dataset does the PIF method reach its highest accuracy? In the paper where that dataset is proposed, which LLM performed the best, and what accounts for its performance? | Your answer should be a Python list of three strings. The first string indicating the full name of the dataset, the second indicating the name of the LLM that performed the best, and the third indicating the reason, e.g. ["dataset", "LLM", "reason"]. | [
"Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors"
] | [
"Measuring Massive Multitask Language Understanding",
"CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge",
"C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models"
] | [] |
a3555904-aa5f-5f4d-b823-b51bacf04995 | Which paper introduced the human-evaluated timeliness metric for misinformation detection? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of COVID-19 Treatments"
] | [
"acl2023"
] |
a36db3b8-16d0-5a14-bebb-3e3eea363d27 | What are some evaluation benchmarks for LLM privacy at inference time, targeted towards model input and NOT the training data? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory"
] | [
"iclr2024"
] |
a3c6958b-aed2-5e28-8dea-5d0b88550ac8 | According to this survey, what are the three most recent decoder-only LLMs for NL2Code? How many programming languages do their training datasets each contain? | Your answer should be a Python dictionary of 3 key-value pairs, where each key is a string and each value is an integer. | [
"Large Language Models Meet NL2Code: A Survey"
] | [
"Efficient Training of Language Models to Fill in the Middle",
"CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation",
"CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X",
"BLOOM: A 176B-Parameter Open-Access Multilingual Language Model",
"SantaCoder: don't reach for the stars!"
] | [] |
a4359833-c1ee-5d01-b18d-1d1a78c749f0 | Is there any work that attacks language models in dialogue generation? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"White-Box Multi-Objective Adversarial Attack on Dialogue Generation"
] | [
"acl2023"
] |
a4913d15-35e1-511a-affd-ef4782d08df9 | I read two papers called Chain of Ideas and Nova, and both involve evaluating characteristics of ideas, such as quality and novelty. I want to know the specific measurement aspects used in these two papers. | Your answer should be a brief text. | [
"Nova: An Iterative Planning and Search Approach to Enhance Novelty and Diversity of LLM Generated Ideas",
"Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents"
] | [] | [] |
a49ce3ea-3977-5eb8-8598-47342bcd60a3 | In Figure 2, what is the next step to take after generating the supportive logical forms? Please use the name presented in this figure. Then, for this step, what are the criteria held in this paper? | Your answer should be a list of two strings, the first element is the name of the next step, and the second element is several sentences about the criteria. | [
"Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation"
] | [] | [] |