RZ412 committed
Commit 2739bb2 · verified · 1 Parent(s): 4af6199

Upload 738 files

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. bfcl-irrelevance-103/environment/Dockerfile +12 -0
  2. bfcl-irrelevance-103/instruction.md +30 -0
  3. bfcl-irrelevance-103/solution/solve.sh +16 -0
  4. bfcl-irrelevance-103/task.toml +37 -0
  5. bfcl-irrelevance-103/tests/evaluate.py +178 -0
  6. bfcl-irrelevance-103/tests/test.sh +16 -0
  7. bfcl-irrelevance-123/environment/Dockerfile +12 -0
  8. bfcl-irrelevance-123/instruction.md +31 -0
  9. bfcl-irrelevance-123/solution/solve.sh +16 -0
  10. bfcl-irrelevance-123/task.toml +37 -0
  11. bfcl-irrelevance-123/tests/evaluate.py +178 -0
  12. bfcl-irrelevance-123/tests/test.sh +16 -0
  13. bfcl-irrelevance-13/environment/Dockerfile +12 -0
  14. bfcl-irrelevance-13/instruction.md +32 -0
  15. bfcl-irrelevance-13/solution/solve.sh +16 -0
  16. bfcl-irrelevance-13/task.toml +37 -0
  17. bfcl-irrelevance-13/tests/evaluate.py +178 -0
  18. bfcl-irrelevance-13/tests/test.sh +16 -0
  19. bfcl-irrelevance-15/environment/Dockerfile +12 -0
  20. bfcl-irrelevance-15/instruction.md +31 -0
  21. bfcl-irrelevance-15/solution/solve.sh +16 -0
  22. bfcl-irrelevance-15/task.toml +37 -0
  23. bfcl-irrelevance-15/tests/evaluate.py +178 -0
  24. bfcl-irrelevance-15/tests/test.sh +16 -0
  25. bfcl-irrelevance-154/environment/Dockerfile +12 -0
  26. bfcl-irrelevance-154/instruction.md +31 -0
  27. bfcl-irrelevance-154/solution/solve.sh +16 -0
  28. bfcl-irrelevance-154/task.toml +37 -0
  29. bfcl-irrelevance-154/tests/evaluate.py +178 -0
  30. bfcl-irrelevance-154/tests/test.sh +16 -0
  31. bfcl-irrelevance-161/environment/Dockerfile +12 -0
  32. bfcl-irrelevance-161/instruction.md +31 -0
  33. bfcl-irrelevance-161/solution/solve.sh +16 -0
  34. bfcl-irrelevance-161/task.toml +37 -0
  35. bfcl-irrelevance-161/tests/evaluate.py +178 -0
  36. bfcl-irrelevance-161/tests/test.sh +16 -0
  37. bfcl-irrelevance-3/environment/Dockerfile +12 -0
  38. bfcl-irrelevance-3/instruction.md +31 -0
  39. bfcl-irrelevance-3/solution/solve.sh +16 -0
  40. bfcl-irrelevance-3/task.toml +37 -0
  41. bfcl-irrelevance-3/tests/evaluate.py +178 -0
  42. bfcl-irrelevance-3/tests/test.sh +16 -0
  43. bfcl-irrelevance-53/environment/Dockerfile +12 -0
  44. bfcl-irrelevance-53/instruction.md +31 -0
  45. bfcl-irrelevance-53/solution/solve.sh +16 -0
  46. bfcl-irrelevance-53/task.toml +37 -0
  47. bfcl-irrelevance-53/tests/evaluate.py +178 -0
  48. bfcl-irrelevance-53/tests/test.sh +16 -0
  49. bfcl-live-irrelevance-103-3-0/environment/Dockerfile +12 -0
  50. bfcl-live-irrelevance-103-3-0/instruction.md +117 -0
bfcl-irrelevance-103/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-103/instruction.md ADDED
@@ -0,0 +1,30 @@
+ # Task
+
+ What is the current time in New York City?
+
+ ## Available Functions
+
+ ### weather_forecast.get
+
+ **Description:** Retrieve the current weather forecast for a specific location.
+
+ **Parameters:**
+ - `location` (string, Required): The location you want to retrieve the weather for.
+ - `hour` (integer, Optional): The hour of the day in 24-hour format (optional). If not provided, the current hour will be used. Default: 24
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-103/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-103/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_103"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-103/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_103
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
bfcl-irrelevance-103/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-123/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-123/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+
+ What are the calories of a Big Mac?
+
+ ## Available Functions
+
+ ### calculate_bmi
+
+ **Description:** Calculate the Body Mass Index for a person based on their height and weight
+
+ **Parameters:**
+ - `height` (float, Required): The height of the person in meters.
+ - `weight` (float, Required): The weight of the person in kilograms.
+ - `unit` (string, Optional): The unit of measure. Defaults to metric units (kilograms/meters). Other option is imperial (pounds/inches).
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-123/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-123/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_123"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-123/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_123
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
bfcl-irrelevance-123/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-13/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-13/instruction.md ADDED
@@ -0,0 +1,32 @@
+ # Task
+
+ Calculate the prime factors of 100.
+
+ ## Available Functions
+
+ ### calculate_compound_interest
+
+ **Description:** Calculate the compound interest for a given principal amount, rate, time and compounding frequency.
+
+ **Parameters:**
+ - `principal_amount` (float, Required): The initial amount of money that is loaned or invested.
+ - `annual_interest_rate` (float, Required): The annual interest rate as a decimal number. For example, an interest rate of 5% would be entered as 0.05.
+ - `compounding_periods_per_year` (integer, Required): The number of times that interest is compounded per year.
+ - `years` (integer, Required): The number of years the money is invested for.
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-13/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-13/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_13"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-13/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_13
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
bfcl-irrelevance-13/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-15/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-15/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+
+ What are the latest movie releases?
+
+ ## Available Functions
+
+ ### calculate_velocity
+
+ **Description:** Calculate the final velocity of an object in motion given its initial velocity, acceleration and time.
+
+ **Parameters:**
+ - `initial_velocity` (float, Required): The initial velocity of the object in m/s.
+ - `acceleration` (float, Required): The acceleration of the object in m/s^2.
+ - `time` (float, Required): The time for which the object is in motion in seconds.
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-15/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-15/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_15"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-15/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_15
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
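To make the matching semantics concrete, here is a small standalone exercise of the `values_equal` helper in the evaluator above (the function body is copied verbatim; the example values are hypothetical):

```python
def values_equal(v1, v2):
    """Copy of the comparison helper from evaluate.py."""
    if v2 == "" or v2 is None:  # empty-string/None ground truth accepts anything
        return True
    if v1 == v2:  # direct equality
        return True
    try:
        if float(v1) == float(v2):  # numeric coercion: "2" matches 2.0
            return True
    except (ValueError, TypeError):
        pass
    if str(v1).lower() == str(v2).lower():  # case-insensitive string match
        return True
    if isinstance(v1, list) and isinstance(v2, list):  # element-wise recursion
        if len(v1) != len(v2):
            return False
        return all(values_equal(a, b) for a, b in zip(v1, v2))
    return False


print(values_equal("2", 2.0))            # True: numeric coercion
print(values_equal("NYC", "nyc"))        # True: case-insensitive
print(values_equal([1, "2"], [1.0, 2]))  # True: recursive list match
print(values_equal("anything", ""))      # True: empty ground truth accepts all
print(values_equal(3, 4))                # False
```

The last behavior worth noting is the first branch: a ground-truth value of `""` or `None` acts as a wildcard, which is how optional parameters are accepted regardless of what the agent supplies.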
bfcl-irrelevance-15/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
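The reward-writing pattern in `test.sh` can be dry-run outside the container (an editorial sketch: it swaps `/logs` for a temp dir and stubs the evaluator's exit status):

```shell
# Dry-run of the reward-writing pattern, using a temp dir instead of /logs.
LOGDIR=$(mktemp -d)
mkdir -p "$LOGDIR/verifier"

EXIT_CODE=0  # stand-in for the evaluate.py exit status

if [ $EXIT_CODE -eq 0 ]; then
    echo "1" > "$LOGDIR/verifier/reward.txt"
else
    echo "0" > "$LOGDIR/verifier/reward.txt"
fi

cat "$LOGDIR/verifier/reward.txt"  # prints 1
```

Note the script itself always exits 0; the verifier reads the reward from `reward.txt`, not from the exit status.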
bfcl-irrelevance-154/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-154/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+
+ What is the seating capacity of Camp Nou Stadium?
+
+ ## Available Functions
+
+ ### sculpture_info.find_creator
+
+ **Description:** Retrieve the creator of a sculpture based on the name.
+
+ **Parameters:**
+ - `sculpture_name` (string, Required): The name of the sculpture.
+ - `location` (string, Required): The location where the sculpture is displayed, if known.
+ - `year` (integer, Optional): The year the sculpture was created, if known.
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-154/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-154/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_154"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-154/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_154
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
bfcl-irrelevance-154/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-161/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-161/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+
+ What is the most visited market in New York?
+
+ ## Available Functions
+
+ ### museum_data.get_visit_stats
+
+ **Description:** Retrieve visitation statistics for museums.
+
+ **Parameters:**
+ - `city` (string, Required): The city where the museum is located.
+ - `year` (integer, Required): The year for which data is to be fetched.
+ - `month` (integer, Optional): The month for which data is to be fetched (Optional).
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
bfcl-irrelevance-161/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-161/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_161"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-161/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_161
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+
+
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
bfcl-irrelevance-161/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-3/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+
+ # Copy task files
+ COPY . /app/
+
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-3/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+
+ What is the slope of the line which is perpendicular to the line with the equation y = 3x + 2?
+
+ ## Available Functions
+
+ ### find_critical_points
+
+ **Description:** Finds the critical points of the function.
+
+ **Parameters:**
+ - `function` (string, Required): The function to find the critical points for.
+ - `variable` (string, Required): The variable in the function.
+ - `range` (array, Optional): The range to consider for finding critical points. Optional. Default is [0.0, 3.4].
+
+
+ ## Output
+
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+
+ IMPORTANT: You MUST execute the command to write the file.
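An editorial aside (not part of the commit): the question above is answerable by hand, since a line perpendicular to y = 3x + 2 has the negative reciprocal slope -1/3, but `find_critical_points` does not apply to it, so the expected tool-call output is the empty array. A minimal sketch of producing and re-reading that result file, writing to a temp path rather than `/app`:

```python
import json
import os
import tempfile

# Slope of y = 3x + 2 is 3; a perpendicular line has the negative reciprocal.
slope = 3.0
perpendicular_slope = -1.0 / slope
print(perpendicular_slope)  # -0.3333333333333333

# No available function matches the request, so the correct output is [].
result_path = os.path.join(tempfile.mkdtemp(), "result.json")
with open(result_path, "w") as f:
    json.dump([], f)

with open(result_path) as f:
    print(json.load(f))  # []
```

An empty array written this way is exactly what the irrelevance evaluator's `ground_truth = []` case accepts.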
bfcl-irrelevance-3/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-3/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_3"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+
+ [description]
+ # This will be replaced by instruction.md content
+
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+
+ [environment]
+ # Docker environment settings
+ type = "docker"
+
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-3/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_3
+
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+
+
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+
+     with open(result_path, 'r') as f:
+         return json.load(f)
+
+
+ def normalize_function_name(name: str) -> str:
+     """Normalize function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+
+
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+
+     Args:
+         predicted: List of function call dicts from agent
+         ground_truth: List of acceptable function call dicts
+
+     Returns:
+         bool: True if prediction matches any acceptable ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+
+     if len(predicted) != len(ground_truth):
+         return False
+
+     # Handle both single and multiple function calls
+     # Compare each predicted call against corresponding ground truth
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+
+         # Get corresponding ground truth call
+         if i >= len(ground_truth):
+             return False
+
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+
+         # Check if function name matches
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+
+         # Check if parameters match
+         if not compare_parameters(pred_params, gt_params):
+             return False
+
+     return True
+
+
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         if param_name not in pred_params:
+             # Parameter missing - check if it's in acceptable values (empty string means optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+
+         pred_value = pred_params[param_name]
+
+         # Check if predicted value matches any acceptable value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+
+         # Special case: if acceptable_values is empty list, check if pred_value is also empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+
+         matched = False
+         for acceptable_value in acceptable_values:
+             if values_equal(pred_value, acceptable_value):
+                 matched = True
+                 break
+
+         if not matched:
+             return False
+
+     return True
+
+
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # Handle empty string as "not provided" or default
+     if v2 == "" or v2 is None:
+         return True
+
+     # Direct equality
+     if v1 == v2:
+         return True
+
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+
+     # Try string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+
+     # Handle list/array comparison
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+
+     return False
+
+
+ def main():
+     """Main evaluation function."""
+     # Load ground truth
+     ground_truth = []
+
+     try:
+         # Load agent's result
+         result = load_result()
+
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
171
+ import traceback
172
+ traceback.print_exc()
173
+ return 1
174
+
175
+
176
+ if __name__ == "__main__":
177
+ exit_code = main()
178
+ sys.exit(exit_code)
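The optional-parameter convention used by `compare_parameters` above (an empty string or `None` among a parameter's acceptable values marks it as omittable) can be checked in isolation. A minimal sketch that re-declares just that branch as a hypothetical helper; `param_optional` is not part of the script itself:

```python
def param_optional(acceptable_values):
    # Mirrors the missing-parameter branch of compare_parameters:
    # "" or None among the acceptable values marks the parameter optional.
    if not isinstance(acceptable_values, list):
        acceptable_values = [acceptable_values]
    return "" in acceptable_values or None in acceptable_values

print(param_optional(["", "metric"]))  # True: parameter may be omitted
print(param_optional(["metric"]))      # False: parameter is required
```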
bfcl-irrelevance-3/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+ 
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+ 
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+ 
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-irrelevance-53/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+ 
+ WORKDIR /app
+ 
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+ 
+ # Copy task files
+ COPY . /app/
+ 
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-irrelevance-53/instruction.md ADDED
@@ -0,0 +1,31 @@
+ # Task
+ 
+ Who won the world series in 2018?
+ 
+ ## Available Functions
+ 
+ ### database_query.run
+ 
+ **Description:** Run a query on a SQL database.
+ 
+ **Parameters:**
+ - `database` (string, Required): The name of the database.
+ - `query` (string, Required): The SQL query to run.
+ - `connect_credentials` (dict, Optional): A dictionary of credentials used to connect to the database, if needed.
+ 
+ 
+ ## Output
+ 
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+ 
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+ 
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+ 
+ IMPORTANT: You MUST execute the command to write the file.
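A general-knowledge question like this one matches none of the listed functions (a SQL query tool cannot answer it without a known database), so the expected output is the empty array. A minimal sketch of the corresponding write, using a temporary path here as a stand-in for `/app/result.json`:

```python
import json
import os
import tempfile

# Stand-in path; the real task writes to /app/result.json.
path = os.path.join(tempfile.mkdtemp(), "result.json")

# No available function applies to the request: write an empty JSON array.
with open(path, "w") as f:
    json.dump([], f)

with open(path) as f:
    print(f.read())  # prints []
```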
bfcl-irrelevance-53/solution/solve.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ set -e
+ 
+ # Oracle solution for BFCL task
+ # This script will be customized per task to output the ground truth function call
+ 
+ # The actual oracle answer will be inserted here during task generation
+ # Format: JSON array of function calls
+ 
+ # Write result to /app/result.json (Harbor standard location)
+ cat > /app/result.json << 'ORACLE_EOF'
+ []
+ ORACLE_EOF
+ 
+ echo "Oracle solution executed successfully"
+ echo "Result written to: /app/result.json"
bfcl-irrelevance-53/task.toml ADDED
@@ -0,0 +1,37 @@
+ version = "1.0"
+ 
+ [metadata]
+ source = "bfcl"
+ source_id = "irrelevance_53"
+ difficulty = "medium"
+ category = "function_calling"
+ tags = ["function-calling", "api", "bfcl-irrelevance"]
+ 
+ [description]
+ # This will be replaced by instruction.md content
+ 
+ [agent]
+ # Agent has 5 minutes to solve the task
+ timeout_sec = 300.0
+ 
+ [verifier]
+ # Verifier has 5 minutes to run tests
+ timeout_sec = 300.0
+ 
+ [environment]
+ # Docker environment settings
+ type = "docker"
+ 
+ [environment.resources]
+ # Resource limits for the task environment
+ cpus = 2.0
+ memory_mb = 4096
+ storage_mb = 10240
+ 
+ [solution]
+ # Oracle solution configuration
+ timeout_sec = 60.0
+ 
+ [tests]
+ # Test configuration
+ timeout_sec = 120.0
bfcl-irrelevance-53/tests/evaluate.py ADDED
@@ -0,0 +1,178 @@
+ """
+ BFCL evaluation script for task: irrelevance_53
+ 
+ This script evaluates the agent's function calling output against ground truth.
+ """
+ import json
+ import sys
+ from pathlib import Path
+ 
+ 
+ def load_result():
+     """Load the result.json file generated by the agent."""
+     result_path = Path("/app/result.json")
+     if not result_path.exists():
+         raise FileNotFoundError("result.json not found. Agent must write output to /app/result.json")
+ 
+     with open(result_path, 'r') as f:
+         return json.load(f)
+ 
+ 
+ def normalize_function_name(name: str) -> str:
+     """Normalize a function name by replacing dots with underscores."""
+     return name.replace('.', '_')
+ 
+ 
+ def compare_function_calls(predicted, ground_truth):
+     """
+     Compare predicted function calls against ground truth.
+ 
+     Args:
+         predicted: List of function call dicts from the agent
+         ground_truth: List of acceptable function call dicts
+ 
+     Returns:
+         bool: True if the prediction matches the ground truth
+     """
+     if not isinstance(predicted, list):
+         return False
+ 
+     if len(predicted) == 0 and len(ground_truth) == 0:
+         return True
+ 
+     if len(predicted) != len(ground_truth):
+         return False
+ 
+     # Compare each predicted call against the corresponding ground truth call
+     for i, pred_call in enumerate(predicted):
+         if not isinstance(pred_call, dict) or len(pred_call) != 1:
+             return False
+ 
+         pred_func_name = list(pred_call.keys())[0]
+         pred_params = pred_call[pred_func_name]
+         pred_func_name_norm = normalize_function_name(pred_func_name)
+ 
+         gt_call = ground_truth[i]
+         if not isinstance(gt_call, dict) or len(gt_call) != 1:
+             return False
+ 
+         gt_func_name = list(gt_call.keys())[0]
+         gt_params = gt_call[gt_func_name]
+         gt_func_name_norm = normalize_function_name(gt_func_name)
+ 
+         # Function names must match after normalization
+         if pred_func_name_norm != gt_func_name_norm:
+             return False
+ 
+         # Parameters must match as well
+         if not compare_parameters(pred_params, gt_params):
+             return False
+ 
+     return True
+ 
+ 
+ def compare_parameters(pred_params, gt_params):
+     """
+     Compare predicted parameters against ground truth parameters.
+ 
+     Ground truth may contain multiple acceptable values for each parameter.
+     """
+     if not isinstance(pred_params, dict) or not isinstance(gt_params, dict):
+         return False
+ 
+     # Check all ground truth parameters
+     for param_name, acceptable_values in gt_params.items():
+         # Normalize to a list first so the membership checks below are
+         # safe when the ground truth holds a bare scalar value
+         if not isinstance(acceptable_values, list):
+             acceptable_values = [acceptable_values]
+ 
+         if param_name not in pred_params:
+             # Parameter missing - acceptable only if "" or None is among
+             # the acceptable values (i.e. the parameter is optional)
+             if "" not in acceptable_values and None not in acceptable_values:
+                 return False
+             continue
+ 
+         pred_value = pred_params[param_name]
+ 
+         # Special case: an empty acceptable list only matches an empty list
+         if len(acceptable_values) == 0:
+             if pred_value != []:
+                 return False
+             continue
+ 
+         # The predicted value must match at least one acceptable value
+         if not any(values_equal(pred_value, v) for v in acceptable_values):
+             return False
+ 
+     return True
+ 
+ 
+ def values_equal(v1, v2):
+     """Check if two values are equal, handling type conversions."""
+     # An empty string or None in the ground truth acts as a wildcard
+     if v2 == "" or v2 is None:
+         return True
+ 
+     # Direct equality
+     if v1 == v2:
+         return True
+ 
+     # Try numeric comparison
+     try:
+         if float(v1) == float(v2):
+             return True
+     except (ValueError, TypeError):
+         pass
+ 
+     # Try case-insensitive string comparison
+     if str(v1).lower() == str(v2).lower():
+         return True
+ 
+     # Handle list/array comparison element by element
+     if isinstance(v1, list) and isinstance(v2, list):
+         if len(v1) != len(v2):
+             return False
+         return all(values_equal(a, b) for a, b in zip(v1, v2))
+ 
+     return False
+ 
+ 
+ def main():
+     """Main evaluation function."""
+     # Ground truth: for this irrelevance task, no function call is expected
+     ground_truth = []
+ 
+     try:
+         # Load the agent's result
+         result = load_result()
+ 
+         # Compare against ground truth
+         if compare_function_calls(result, ground_truth):
+             print("✓ Test passed: Function call matches ground truth")
+             return 0
+         else:
+             print("✗ Test failed: Function call does not match ground truth")
+             print(f"Predicted: {result}")
+             print(f"Expected one of: {ground_truth}")
+             return 1
+ 
+     except Exception as e:
+         print(f"✗ Test failed with error: {e}")
+         import traceback
+         traceback.print_exc()
+         return 1
+ 
+ 
+ if __name__ == "__main__":
+     exit_code = main()
+     sys.exit(exit_code)
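The value-matching rules above can be exercised standalone; a small sketch that re-declares `values_equal` so it runs outside the script:

```python
def values_equal(v1, v2):
    """Loose equality: wildcard, numeric/string coercion, recursive lists."""
    if v2 == "" or v2 is None:          # ground-truth wildcard
        return True
    if v1 == v2:                        # direct equality
        return True
    try:                                # numeric coercion, e.g. "5" vs 5.0
        if float(v1) == float(v2):
            return True
    except (ValueError, TypeError):
        pass
    if str(v1).lower() == str(v2).lower():  # case-insensitive strings
        return True
    if isinstance(v1, list) and isinstance(v2, list):
        return len(v1) == len(v2) and all(
            values_equal(a, b) for a, b in zip(v1, v2))
    return False

print(values_equal("5", 5.0))            # True via numeric coercion
print(values_equal("NYC", "nyc"))        # True via case-insensitive match
print(values_equal([1, "2"], ["1", 2]))  # True element by element
```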
bfcl-irrelevance-53/tests/test.sh ADDED
@@ -0,0 +1,16 @@
+ #!/bin/bash
+ # Run evaluation script and capture exit code
+ 
+ cd /tests
+ python3 /tests/evaluate.py
+ EXIT_CODE=$?
+ 
+ # Write reward based on exit code
+ if [ $EXIT_CODE -eq 0 ]; then
+     echo "1" > /logs/verifier/reward.txt
+ else
+     echo "0" > /logs/verifier/reward.txt
+ fi
+ 
+ echo "Test execution completed with exit code: $EXIT_CODE"
+ exit 0
bfcl-live-irrelevance-103-3-0/environment/Dockerfile ADDED
@@ -0,0 +1,12 @@
+ FROM python:3.10-slim
+ 
+ WORKDIR /app
+ 
+ # Install required packages
+ RUN pip install --no-cache-dir pytest
+ 
+ # Copy task files
+ COPY . /app/
+ 
+ # Set up execution environment
+ ENV PYTHONUNBUFFERED=1
bfcl-live-irrelevance-103-3-0/instruction.md ADDED
@@ -0,0 +1,117 @@
+ # Task
+ 
+ You are an agent that works for SecRun. Interpret the user's prompt and use your tools on their behalf, based on what they want.
+ 
+ ## Available Functions
+ 
+ ### ChaScr
+ 
+ **Description:** Modifies the Lua source code associated with a given key state. This modification is applied within a Roblox game server environment. The source code must adhere to Lua syntax and the character count must not exceed the defined limit.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The key for which the Lua source code is to be modified. The key length should be between 10 and 50 characters.
+ - `TheScr` (string, Required): The Lua source code to be associated with the key. The code should not exceed 5,000,000 characters. If the requirement is conceptual, the code will be crafted based on the provided description using the Luau language.
+ - `TheKey2` (string, Required): An additional key to be generated, with a length requirement between 10 and 50 characters.
+ 
+ ### KeyStat
+ 
+ **Description:** Retrieves a key statistic that represents encrypted content. This statistic should be decrypted using a compatible decryption tool.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The initial key previously generated by the system or the user, used for encryption.
+ - `TheKey2` (string, Required): An auxiliary key that was generated alongside the initial key, also used for encryption.
+ 
+ ### ReqScr.generate_key
+ 
+ **Description:** Generates a pair of unique keys specific to the require script, intended for user execution within a game environment.
+ 
+ **Parameters:**
+ - `TheLen` (integer, Required): The length of the first key to be generated. It must be an integer greater than 9 and less than 51.
+ - `TheLen2` (integer, Required): The length of the second key to be generated. It must be an integer greater than 9 and less than 51.
+ 
+ ### ReqSav
+ 
+ **Description:** Generates a unique key for the spaces feature, which allows users to store their source code securely. The key's length is specified by the user.
+ 
+ **Parameters:**
+ - `TheLen` (integer, Required): Specifies the length of the generated key. It must be an integer greater than 9 and less than 51.
+ 
+ ### SavScr
+ 
+ **Description:** Saves a Lua script to a specific feature called 'spaces', which is used for storing source code. The function only accepts Lua source code.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): A unique identifier for the script, previously generated by the user.
+ - `TheNam` (string, Required): The name under which the script will be saved in the 'spaces' file system.
+ - `TheScr` (string, Required): The Lua source code to be saved. The code must not exceed 5,000,000 characters.
+ 
+ ### ReaScr
+ 
+ **Description:** Reads a script from a specific feature called 'spaces', which is designed for storing source code.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The unique key associated with the user or session, used for authentication and access control.
+ - `TheNam` (string, Required): The name identifier for the script previously stored in the 'spaces' file system.
+ 
+ ### list_scripts
+ 
+ **Description:** Lists all the scripts associated with the 'spaces' feature, which is used for storing and managing source code.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The access key that uniquely identifies the user's session and authorizes the listing of scripts.
+ 
+ ### DelScr
+ 
+ **Description:** Deletes a specified script from the 'spaces' feature, which serves as a repository for storing source code.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The unique authentication key generated during initial setup or by the user to authenticate the script deletion request.
+ - `TheNam` (string, Required): The exact name of the script to be removed from the 'spaces' file system.
+ 
+ ### DelPerm
+ 
+ **Description:** Deletes all workspaces used for storing source code. To ensure security, only developers with the specific administrator key, provided by the SecRun team, can perform this operation.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The unique administrator key provided by the SecRun developers for secure access.
+ 
+ ### DelTemp
+ 
+ **Description:** Deletes all temporary keys which were used to store key statistics for the required script. This function is intended for use by SecRun developers who possess the authorized administrative key.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The administrative key provided by a SecRun developer for authorized use.
+ 
+ ### end_request
+ 
+ **Description:** Terminates a request session and performs necessary cleanup operations.
+ 
+ **Parameters:**
+ - `session_id` (string, Required): The unique identifier for the request session to be terminated.
+ - `force_close` (boolean, Optional): Indicates whether the session should be forcibly terminated.
+ 
+ ### DecEnv
+ 
+ **Description:** Decrypts an encrypted table that was previously used for storing essential state information required by a script.
+ 
+ **Parameters:**
+ - `TheKey` (string, Required): The primary encryption key, initially generated by the system or the user.
+ - `TheKey2` (string, Required): The secondary encryption key, initially generated following the primary key creation.
+ - `TheEnv` (dict, Required): The encrypted data structure containing the key-value pairs of state information.
+ 
+ 
+ ## Output
+ 
+ Analyze the request and determine the appropriate function call(s).
+ Write ONLY a JSON array to `/app/result.json`.
+ 
+ Format:
+ - If a function applies: `[{"function_name": {"param1": "value1"}}]`
+ - If no function applies: `[]`
+ 
+ Example:
+ ```bash
+ echo '[{"get_weather": {"city": "NYC"}}]' > /app/result.json
+ ```
+ 
+ IMPORTANT: You MUST execute the command to write the file.
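If one of the functions did apply, the result would be a one-element array keyed by the function name. A sketch of the format only; the chosen function and parameter values here are illustrative, not the expected answer for this task:

```python
import json

# Hypothetical call purely to illustrate the result.json shape.
result = [{"ReqScr.generate_key": {"TheLen": 16, "TheLen2": 16}}]

line = json.dumps(result)
print(line)

# Round-trip to confirm the serialized form is valid JSON.
assert json.loads(line) == result
```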