selfevolveagent/examples/hotpotqa/debug/ourpipeline_new_0686.json
{
"class_name": "SequentialWorkFlowGraph",
"goal": "Answer the question using multi-hop reasoning grounded in retrieved context.",
"tasks": [
{
"name": "evidence_select",
"description": "Select the minimal set of evidence snippets needed to answer the question (multi-hop).",
"inputs": [
{
"name": "question",
"type": "str",
"description": "The question to answer.",
"required": true
}
],
"outputs": [
{
"name": "evidence",
"type": "list[dict]",
"description": "Chosen evidence with source info.",
"required": true
}
],
"prompt": "```\nYou are an evidence selector for HotpotQA. For the question `{question}` and its retrieved context, select the smallest set of snippets that collectively support the answer (multi-hop if needed). Prefer evidence that is precise, relevant, and sufficient; when several snippets overlap, keep those that most directly support the answer. Confirm the reliability and relevance of each snippet before passing it on to answer generation. If the evidence is insufficient, ambiguous, or contradictory, say so in your output. Output XML:\n<evidence>\n <item>\n <source>...</source>\n <title>...</title>\n <snippet>...</snippet>\n <url>...</url>\n </item>\n ...\n</evidence>\n```",
"prompt_template": {
"class_name": "StringTemplate",
"instruction": "You are an evidence selector for HotpotQA. From the retrieved question with context, pick the smallest set of snippets that jointly support the answer (multi-hop if needed). Prefer high-precision evidence. Output XML:\n<evidence>\n <item>\n <source>...</source>\n <title>...</title>\n <snippet>...</snippet>\n <url>...</url>\n </item>\n ...\n</evidence>"
},
"system_prompt": "You are a helpful and highly intelligent assistant.",
"parse_mode": "xml",
"parse_func": null,
"parse_title": null,
"tool_names": []
},
{
"name": "evidence_validate8428",
"description": "Validate the selected evidence for relevance and reliability. Takes evidence as input and produces validated_evidence as output.",
"inputs": [
{
"name": "evidence",
"type": "list[dict]",
"description": "Evidence snippets selected by the evidence_select task.",
"required": true
}
],
"outputs": [
{
"name": "validated_evidence",
"type": "str",
"description": "Evidence confirmed as relevant and reliable for answer generation.",
"required": true
}
],
"prompt": "\"\"\"\nThink step by step to validate the provided evidence `{evidence}`. Check each snippet for relevance to the question and reliability of its source; use the available search tools to cross-check claims where needed. Discard snippets that are irrelevant, unreliable, or contradictory, and note any gaps that remain. Explain your reasoning in the `<thought>` field, detailing how you assessed each snippet. Provide the retained evidence in the `<validated_evidence>` field. Format your output in XML format, such as `<thought>xxx</thought>` and `<validated_evidence>xxx</validated_evidence>`.\n\"\"\"",
"prompt_template": {
"class_name": "StringTemplate",
"instruction": "Think step by step to validate the provided evidence. Check each snippet for relevance and reliability, discard weak or contradictory items, and explain your reasoning in the 'thought' field. Provide the retained evidence in the 'validated_evidence' field.\nFormat your output in xml format, such as <thought>xxx</thought> and <validated_evidence>xxx</validated_evidence>."
},
"system_prompt": "You are a helpful and highly intelligent assistant.",
"parse_mode": "xml",
"parse_func": null,
"parse_title": null,
"tool_names": [
"DDGSSearchToolkit",
"WikipediaSearchToolkit",
"ArxivToolkit"
]
},
{
"name": "answer_generate",
"description": "Answer the question using ONLY the selected evidence; produce a concise final answer.",
"inputs": [
{
"name": "question",
"type": "str",
"description": "The question to answer.",
"required": true
},
{
"name": "evidence",
"type": "list[dict]",
"description": "Selected evidence snippets.",
"required": true
}
],
"outputs": [
{
"name": "answer",
"type": "str",
"description": "Final answer (short span when possible).",
"required": true
}
],
"prompt": "```\nYou are a HotpotQA answering model. Use ONLY the provided evidence to answer the question `{question}`. Check that the evidence is relevant and sufficient before answering. If the evidence does not directly answer the question or is insufficient, say so in your response. When evidence conflicts or several answers are possible, choose the one best supported by the context. Keep the answer concise (often a short span).\n\nOutput XML exactly in this format:\n<result>\n <answer>...</answer>\n</result>\n```",
"prompt_template": {
"class_name": "StringTemplate",
"instruction": "You are a HotpotQA answering model. Use ONLY the provided evidence to answer the question. Keep the answer concise (often a short span).\n\nOutput XML exactly in this format:\n<result>\n <answer>...</answer>\n</result>"
},
"system_prompt": "You are a helpful and highly intelligent assistant.",
"parse_mode": "xml",
"parse_func": null,
"parse_title": null,
"tool_names": []
}
]
}
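Outside the workflow definition itself, here is a minimal sketch of how the XML that the prompts request could be parsed back into the declared output types (`list[dict]` for `evidence`, `str` for `answer`), since every task sets `"parse_mode": "xml"`. The function names and the assumption of well-formed, single-rooted XML are illustrative, not part of the pipeline:

```python
import xml.etree.ElementTree as ET


def parse_evidence(xml_text: str) -> list[dict]:
    """Parse the <evidence> XML from the evidence_select task into list[dict]."""
    root = ET.fromstring(xml_text)  # assumes a single <evidence> root element
    items = []
    for item in root.findall("item"):
        items.append({
            "source": item.findtext("source", default="").strip(),
            "title": item.findtext("title", default="").strip(),
            "snippet": item.findtext("snippet", default="").strip(),
            "url": item.findtext("url", default="").strip(),
        })
    return items


def parse_answer(xml_text: str) -> str:
    """Extract the <answer> span from the answer_generate task's XML output."""
    root = ET.fromstring(xml_text)  # assumes a single <result> root element
    answer = root.findtext("answer")
    if answer is None:
        raise ValueError("missing <answer> element in model output")
    return answer.strip()


evidence_xml = (
    "<evidence>"
    "<item><source>wiki</source><title>Paris</title>"
    "<snippet>Paris is the capital of France.</snippet>"
    "<url>https://en.wikipedia.org/wiki/Paris</url></item>"
    "</evidence>"
)
print(parse_evidence(evidence_xml)[0]["title"])                 # → Paris
print(parse_answer("<result><answer>Paris</answer></result>"))  # → Paris
```

In practice a model may emit surrounding prose or malformed tags, so a real parser would want to locate the XML fragment first and fall back gracefully; this sketch only covers the well-formed case the prompts describe.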