Data augmentation details

#1
by catherinexyz - opened

Hi, nice work! I noticed in your arXiv paper that you use data augmentation to generate training data for 3D visual grounding. However, I also noticed that the data augmentation has been applied to some question types in the Grounding COT (GCOT) data as well (for example, first-appearance-order questions). I am wondering what specific data augmentation techniques you use for the GCOT portion of the data: are you only augmenting the bounding boxes of the objects in the video, or did you also actually manipulate the objects' locations in the video itself? Thanks!

For example, this QA pair:

```python
{'id': '1d235502-7047-4728-832a-fa47f6e622f0',
 'data_source': 'arkitscenes',
 'scene_name': '41048124',
 'question_type': 'obj_appearance_order',
 'conversations': [
   {'from': 'human', 'value': '\nThe video captures 3D spatial information of a scene. Your task is to focus on the spatial relationships in the video.\n\n1. Question Analysis\n- Carefully analyze the given question.\n- Identify relevant objects only if bounding box information would help in answering the question.\n- If bounding boxes are not useful for answering the question, skip object identification entirely.\n\n2. Bounding Box Output (conditional)\n- Output bounding boxes only if they are useful for answering the question.\n- For each relevant object, output its name, counts, and corresponding 3D bounding boxes in the format: (x1, y1, z1, x2, y2, z2)\n- If an object has multiple instances, list only distinct boxes, ensuring no duplicates, one by one.\n- If an object has too many instances, select and output up to 5 bounding boxes that are most relevant to the question.\n- If bounding boxes are not needed, skip this step entirely.\n\n3. Reasoning\n- Base your reasoning on all available information.\n- Derive the final answer strictly based on this reasoning.\n\n4. Output Format (STRICT)\nYour response must follow exactly this structure:\n\n[Analysing the question]\nIf bounding boxes are useful for answering the question, output them in the following format: object_name object_count (x1, y1, z1, x2, y2, z2)..., object_name object_count (x1, y1, z1, x2, y2, z2)(x1, y1, z1, x2, y2, z2)...\nIf bounding boxes are not useful for answering the question, skip this step entirely.\n[Reasoning grounded in all available information][Final concise answer here]\n\nQuestion: \nWhat will be the initial order of appearance of these items in the video: chair, table, tv_monitor, sofa?\nOptions:\nA. chair, table, tv_monitor, sofa\nB. sofa, chair, table, tv_monitor\nC. table, tv_monitor, sofa, chair\nD. tv_monitor, sofa, chair, table\n'},
   {'from': 'gpt', 'value': 'The question is asking for the first-time appearance order of chair, table, tv_monitor, sofa. To solve this, I only need to focus on the temporal information in the video without identifying any object bounding boxes.\nCombined with video and the given options, the first-time appearance order of mentioned objects in the options is B. sofa, chair, table, tv_monitor.B'}],
 'metadata': {'dataset': 'vsibench_cot',
              'data_augmentation': {'rescale': {'flag': True, 'scale': 1.073053240776062},
                                    'rot_z': {'flag': True, 'rot': 1.0},
                                    'trans_x': {'flag': False, 'trans': 0},
                                    'trans_y': {'flag': True, 'trans': -0.025245095843190235},
                                    'trans_z': {'flag': True, 'trans': 0.8749931584187944}}},
 'video': 'arkitscenes/41048124'}
```
Can you explain which objects the data augmentation technique has been applied to? Are you applying the rotation and translation to the bounding boxes of all of the objects stated in the question? Thanks!
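For reference, here is a rough sketch of how I imagine the `rescale` / `rot_z` / `trans_*` fields in the metadata might act on a 3D box, in case it helps clarify my question. This is only my guess at the semantics (uniform scale about the origin, rotation about the z axis in radians, then per-axis translation, re-fitting the axis-aligned box afterwards); please correct me if the actual implementation differs.

```python
import math

def augment_box(box, scale=1.0, rot_z=0.0, trans=(0.0, 0.0, 0.0)):
    """Apply rescale -> z-rotation -> translation to an axis-aligned 3D box
    (x1, y1, z1, x2, y2, z2), then re-fit the axis-aligned hull.

    Parameter semantics are guessed from the metadata keys
    (rescale.scale, rot_z.rot, trans_{x,y,z}.trans); rot_z is
    assumed to be in radians about the scene origin.
    """
    x1, y1, z1, x2, y2, z2 = box
    # Enumerate the 8 corners of the box.
    corners = [(x, y, z) for x in (x1, x2) for y in (y1, y2) for z in (z1, z2)]
    c, s = math.cos(rot_z), math.sin(rot_z)
    tx, ty, tz = trans
    out = []
    for x, y, z in corners:
        # Uniform rescale about the origin.
        x, y, z = x * scale, y * scale, z * scale
        # Rotate about the z axis.
        x, y = c * x - s * y, s * x + c * y
        # Translate.
        out.append((x + tx, y + ty, z + tz))
    xs, ys, zs = zip(*out)
    # Re-fit the axis-aligned bounding box after rotation.
    return (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs))
```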
