You are a helpful AI scientist building up the codebase for me.
This project trains an open-sourced model to produce CoT-style reasoning for text-to-image and text-to-video generation quality assessment. You will use LLaMA-Factory to train the model and write the evaluation functions.
# Preparation
## Data
There is a folder called `ea-data/agent` containing 3 subfolders:
* `vbench_results`: stores the results of using proprietary models to evaluate the different dimensions in VBench; the results are CoT style.
* `t2i_results`: stores the results of using proprietary models to evaluate the different dimensions in T2I-CompBench; the results are CoT style.
* `open_results`: stores the results of using proprietary models to evaluate open-ended queries.
Your first job is to write and execute a Python script that cleans the data in the folders above and converts it into the format of `/home/data2/sltian/code/evaluation_agent_dev/LLaMA-Factory/data/alpaca_en_demo.json`.
If specified, the content of the `system` column is used as the system prompt.
The `history` column is a list of string pairs, each pair holding the instruction and the response of one earlier turn of the conversation. Note that during instruction-supervised fine-tuning, the responses in the history are also used for model learning.
The required format for an instruction-supervised fine-tuning dataset is as follows:
| [ | |
| { | |
| "instruction": "人类指令(必填)", | |
| "input": "人类输入(选填)", | |
| "output": "模型回答(必填)", | |
| "system": "系统提示词(选填)", | |
| "history": [ | |
| ["第一轮指令(选填)", "第一轮回答(选填)"], | |
| ["第二轮指令(选填)", "第二轮回答(选填)"] | |
| ] | |
| } | |
| ] | |
Below is an example of a multi-turn conversation in alpaca format; for a single-turn conversation, simply omit the `history` column.
| [ | |
| { | |
| "instruction": "今天的天气怎么样?", | |
| "input": "", | |
| "output": "今天的天气不错,是晴天。", | |
| "history": [ | |
| [ | |
| "今天会下雨吗?", | |
| "今天不会下雨,是个好天气。" | |
| ], | |
| [ | |
| "今天适合出去玩吗?", | |
| "非常适合,空气质量很好。" | |
| ] | |
| ] | |
| } | |
| ] | |
For data in the above format, the dataset description in `dataset_info.json` should be:
```json
"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "system": "system",
    "history": "history"
  }
}
```
## Train
After cleaning and collecting the data, write a script to train the `Qwen2.5-3B-Instruct` model on the created dataset.
Training uses `LLaMA-Factory`. Read the directory and write a script to train the model.
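A training run might be set up as below. The config keys follow the LoRA SFT examples shipped with LLaMA-Factory, but the dataset name, output path, template choice, and hyperparameters here are assumptions to verify against the local install, not confirmed values.

```python
import json

# Sketch of a LoRA SFT config for LLaMA-Factory. "cot_quality_assessment"
# is a placeholder for the name registered in dataset_info.json, and the
# hyperparameters are starting points, not tuned values.
config = {
    "model_name_or_path": "Qwen/Qwen2.5-3B-Instruct",
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "lora",
    "lora_target": "all",
    "dataset": "cot_quality_assessment",
    "template": "qwen",
    "cutoff_len": 2048,
    "output_dir": "saves/qwen2.5-3b-cot-sft",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1e-4,
    "num_train_epochs": 3.0,
    "bf16": True,
}

with open("qwen2p5_3b_cot_sft.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)

# Launch from the LLaMA-Factory repo root:
#   llamafactory-cli train qwen2p5_3b_cot_sft.json
```

Writing the config as JSON keeps the script stdlib-only; a YAML file works equally well if PyYAML is available.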