siyich committed on
Commit bfb6a86 · verified · 1 parent: eb458b4

Upload data and config files

Files changed (2)
  1. data/train.parquet +3 -0
  2. toolshed_config.yaml +470 -0
data/train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78d3ea7fc4c8fc294575913c669e0116616f2a73069c0947847c51b5ae639473
+size 872841
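The three added lines are a Git LFS pointer, not the parquet data itself: the actual 872,841-byte file lives in LFS storage and is addressed by its SHA-256 object ID. As a minimal sketch (the `parse_lfs_pointer` helper below is hypothetical, not part of this repo), such a pointer can be split into its fields like this:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", separated by a single space.
        key, _, value = line.partition(" ")
        fields[key] = value
    # The "oid" value is algorithm-prefixed, e.g. "sha256:<hex digest>".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:78d3ea7fc4c8fc294575913c669e0116616f2a73069c0947847c51b5ae639473
size 872841"""
info = parse_lfs_pointer(pointer)
```

The `size` field lets a client verify the downloaded object's length before checking the digest.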
toolshed_config.yaml ADDED
@@ -0,0 +1,470 @@
+tools:
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.capture_image
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.capture_image
+      description: 'Capture an RGB image from the robot''s camera showing the current
+        scene.
+
+        Text output: Image dimensions and capture status.
+
+        Image output: RGB image from robot camera.'
+      parameters:
+        type: object
+        properties: {}
+        required: []
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.execute_grasp
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.execute_grasp
+      description: "Execute a grasp by moving the robot to the specified pose via\
+        \ a pre-grasp point, and closing the gripper.\nText output: Status of the\
+        \ grasp execution.\n\nArgs:\n grasp_pose: 4\xD74 transformation matrix (list\
+        \ or numpy array) representing the grasp pose\n in the robot's camera frame\
+        \ (OpenCV convention).\n\nReturns:\n ToolResult: value dict with ``success``\
+        \ (bool) and ``execution_time_s`` (float)."
+      parameters:
+        type: object
+        properties:
+          grasp_pose:
+            type: string
+            description: Parameter grasp_pose
+        required:
+        - grasp_pose
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.get_depth
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.get_depth
+      description: 'Retrieve depth map from the robot''s depth sensor.
+
+        Text output: Summary of depth data including image dimensions, focal length,
+        and depth statistics.'
+      parameters:
+        type: object
+        properties: {}
+        required: []
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.get_depth_with_pointcloud
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.get_depth_with_pointcloud
+      description: 'Retrieve depth map from robot''s depth sensor and generate 3D
+        point cloud.
+
+        Text output: Summary of depth data and point cloud generation including dimensions,
+        focal length, depth statistics, and point cloud size.'
+      parameters:
+        type: object
+        properties: {}
+        required: []
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.place_object_at_2d_location
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.place_object_at_2d_location
+      description: "Simulate placing object at 2D location (always succeeds).\nText\
+        \ output: Confirmation that placement was successful.\n\nArgs:\n placement_point_2d:\
+        \ 2D normalized image coordinate [x, y] in the range [0, 1]\n where the object\
+        \ should be placed.\n\nReturns:\n ToolResult: value dict with ``success``\
+        \ (bool) and ``execution_time_s`` (float)."
+      parameters:
+        type: object
+        properties:
+          placement_point_2d:
+            type: string
+            description: Parameter placement_point_2d
+        required:
+        - placement_point_2d
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: robot.place_object_at_3d_location
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: robot.place_object_at_3d_location
+      description: "Simulate placing object at 3D location (always succeeds).\nText\
+        \ output: Confirmation that placement was successful.\n\nArgs:\n placement_point_3d:\
+        \ 3D point [x, y, z] in the robot's camera frame (list or numpy array)\n where\
+        \ the object should be placed.\n\nReturns:\n ToolResult: value dict with ``success``\
+        \ (bool) and ``execution_time_s`` (float)."
+      parameters:
+        type: object
+        properties:
+          placement_point_3d:
+            type: string
+            description: Parameter placement_point_3d
+        required:
+        - placement_point_3d
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: vision_ops.index_at
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: vision_ops.index_at
+      description: 'Get the pixel value in the numpy ndarray `data` at the given normalized
+        coordinates (u,v).
+
+        Note the input arguments are called `data`, `u`, `v`.
+
+
+        Text output: Information about the pixel value at the given coordinates.'
+      parameters:
+        type: object
+        properties:
+          data:
+            type: string
+            description: Numpy ndarray of shape (H, W) or (H, W, C), or PIL Image
+          u:
+            type: number
+            description: Normalized x-coordinate in [0, 1]
+          v:
+            type: number
+            description: Normalized y-coordinate in [0, 1]
+        required:
+        - data
+        - u
+        - v
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: bounding_box.compute_bbox
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: bounding_box.compute_bbox
+      description: 'Compute an oriented bounding box for a masked subset of a point
+        cloud.
+
+        Text output: Summary containing number of input points, the point coordinates
+        in 3d and 2d,
+
+        mask shape, box extents, and edges.'
+      parameters:
+        type: object
+        properties:
+          point_cloud:
+            type: string
+            description: scene point cloud, np.ndarray of shape (N, 3) with float
+              values.
+          mask:
+            type: string
+            description: np.ndarray of shape (H, W) with boolean values segmenting
+              the target object. Dimensions are used for camera projection.
+          focal_length_px:
+            type: number
+            description: Camera focal length in pixels (square pixels assumed).
+        required:
+        - point_cloud
+        - mask
+        - focal_length_px
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: sam2.segment_from_point
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: sam2.segment_from_point
+      description: 'Segment the object at normalized coordinates.
+
+        Text output: Summary of segmentation including mask dimensions and IoU (Intersection
+        over Union) confidence score.'
+      parameters:
+        type: object
+        properties:
+          x:
+            type: number
+            description: X-coordinate in normalized range [0, 1].
+          y:
+            type: number
+            description: Y-coordinate in normalized range [0, 1].
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - x
+        - y
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: sam2.segment_from_points
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: sam2.segment_from_points
+      description: 'Segment an object using multiple normalized coordinates.
+
+        Text output: Summary of segmentation including mask dimensions and IoU scores
+        for candidate masks.'
+      parameters:
+        type: object
+        properties:
+          points:
+            type: string
+            description: Sequence of (x, y) normalized coordinates in [0, 1] range.
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - points
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: depth_estimator.estimate_depth
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: depth_estimator.estimate_depth
+      description: 'Estimate depth map from a single image.
+
+        Text output: Summary of depth estimation including image dimensions, focal
+        length, and depth statistics.'
+      parameters:
+        type: object
+        properties:
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: depth_estimator.estimate_depth_with_pointcloud
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: depth_estimator.estimate_depth_with_pointcloud
+      description: 'Estimate depth and generate 3D point cloud from a single image.
+
+        Text output: Summary of depth estimation and point cloud generation including
+        dimensions, focal length, depth statistics, and point cloud size.'
+      parameters:
+        type: object
+        properties:
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: grasp_generator.compute_grasp
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: grasp_generator.compute_grasp
+      description: 'Generate a single grasp pose for a masked subset of a point cloud.
+
+        Text output: Confidence score, number of masked points used, projected 2D
+        gripper points in normalize image coordinates.'
+      parameters:
+        type: object
+        properties:
+          point_cloud:
+            type: string
+            description: Nx3 numpy float array. Full scene point cloud.
+          mask:
+            type: string
+            description: Boolean mask aligning with *image*; accepts ndarray. Indicates
+              object points.
+          focal_length_px:
+            type: number
+            description: Camera focal length in pixels (square pixels assumed).
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - point_cloud
+        - mask
+        - focal_length_px
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: roborefer.detect_all
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: roborefer.detect_all
+      description: 'Detect *all* instances of *obj_name* in *image*.
+
+        Text output: List of point coordinates for the detected objects, in normalized
+        pixel space in range [0, 1].'
+      parameters:
+        type: object
+        properties:
+          obj_name:
+            type: string
+            description: Name or description of the object to detect.
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - obj_name
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: roborefer.detect_one
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: roborefer.detect_one
+      description: 'Detect *one* instance of *obj_name* in *image*.
+
+        Text output: coordinates of a single point for the first instance of the object,
+        in normalized pixel space in range [0, 1].'
+      parameters:
+        type: object
+        properties:
+          obj_name:
+            type: string
+            description: Parameter obj_name
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - obj_name
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: vlm.detect_all
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: vlm.detect_all
+      description: 'Detect *all* instances of *obj_name* in *image*.
+
+        Text output: List of point coordinates for the detected objects, in normalized
+        pixel space in range [0, 1].'
+      parameters:
+        type: object
+        properties:
+          obj_name:
+            type: string
+            description: Name or description of the object to detect.
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - obj_name
+        - image_index
+- class_name: toolshed.integration.verl.ToolshedMethodTool
+  config:
+    type: native
+    router_name: toolshed_router
+    namespace: toolshed
+    function_name: vlm.detect_one
+    timeout: 30
+  tool_schema:
+    type: function
+    function:
+      name: vlm.detect_one
+      description: 'Detect *one* instance of *obj_name* in *image*.
+
+        Text output: coordinates of a single point for the first instance of the object,
+        in normalized pixel space in range [0, 1].'
+      parameters:
+        type: object
+        properties:
+          obj_name:
+            type: string
+            description: Parameter obj_name
+          image_index:
+            type: integer
+            description: 0-based index of an image already present in the conversation.
+              Use this to reference an inline image by its order (0 = first, 1 = second,
+              ...).
+        required:
+        - obj_name
+        - image_index
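Every entry in the added config follows the same shape: a `class_name`, a routing `config` block, and an OpenAI-style `tool_schema` whose `function` carries the name, description, and JSON-Schema parameters. As a minimal sketch of how a consumer might load and sanity-check such a file (assuming PyYAML is available; the inline snippet below is a trimmed one-tool stand-in for the full 470-line file):

```python
import yaml

# Trimmed stand-in for toolshed_config.yaml (same structure, one tool).
config_text = """
tools:
- class_name: toolshed.integration.verl.ToolshedMethodTool
  config:
    type: native
    router_name: toolshed_router
    namespace: toolshed
    function_name: robot.capture_image
    timeout: 30
  tool_schema:
    type: function
    function:
      name: robot.capture_image
      parameters:
        type: object
        properties: {}
        required: []
"""

config = yaml.safe_load(config_text)

# Collect the function names the config advertises to the model.
names = [t["tool_schema"]["function"]["name"] for t in config["tools"]]

for tool in config["tools"]:
    fn = tool["tool_schema"]["function"]
    # Every declared "required" parameter must exist in "properties".
    assert set(fn["parameters"]["required"]) <= set(fn["parameters"]["properties"])
    # The schema name should match the routed function name.
    assert fn["name"] == tool["config"]["function_name"]
```

The same two invariants hold for all seventeen tools in the file above, so a check like this would catch a schema/routing mismatch before training starts.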