task | category | research_problem | dataset | metric | metadata.yaml | project_description.md | prepare.py | evaluate_prepare.py | evaluate.py | custom_labels.py | utils.py |
|---|---|---|---|---|---|---|---|---|---|---|---|
CodeGenerationAPPSPassAt5 | Code | Code Generation | codeparrot/apps | Pass@5 | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
- pyext
prepare_code_python_requirements:
- pyext
logging_info:
name: CodeGenerationAPPSPassAt5
category: Code
research_problem: Code Generation
output_type: Text Generation
dataset: codeparrot/apps
config: all
metric: Pass@5
input_columns:
- question
- starter_code
scoring_column: input_output
train_split: train
test_split: test
shape:
- 5000
- 5
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Code Llama: Open Foundation Models for Code'
sota_paper_url: https://arxiv.org/pdf/2308.12950
sota_score: 0.187
sota_year: 2023
sota_venue: Model technical report
dataset_paper_url: https://arxiv.org/abs/2105.09938
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a machine learning project; your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Generate five independent Python programs for each competitive-programming problem. Each program must read from standard input, write to standard output, and attempt to solve the given problem completely. A prediction for a problem is considered correct if at least one of the five submitted programs passes all of the official hidden test cases. Your predictions are scored with the Pass@5 metric: the fraction of test problems solved by at least one of the five attempts.
## Data
### Dataset Structure
The default config of the APPS dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"problem_id": int, // Unique identifier of the problem
"question": string, // Full natural-language statement of the problem
"solutions": string, // JSON-encoded list of known correct Python solutions; may be an empty string if none
"input_output": string, // JSON-encoded dictionary of official test data; keys:
// "inputs": list of input strings,
// "outputs": list of expected output strings,
// "fn_name": (optional) required function name
"difficulty": string, // Difficulty label: "introductory", "interview", or "competition"
"url": string, // Source URL of the problem statement
"starter_code": string // (Optional) starter template code; may be empty
}
```
> Note: Only the `question` and `starter_code` columns are provided in the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
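Since the `solutions` and `input_output` columns are JSON-encoded strings, they must be decoded before use. A minimal sketch over an illustrative row (not real APPS data):

```python
import json

# Illustrative row in the same shape as an APPS train example (not real data)
sample = {
    "input_output": json.dumps({
        "inputs": ["1 2\n", "3 4\n"],
        "outputs": ["3\n", "7\n"],
    })
}

in_outs = json.loads(sample["input_output"])
inputs, outputs = in_outs["inputs"], in_outs["outputs"]
# Call-based problems additionally carry a "fn_name" key
is_call_based = in_outs.get("fn_name") is not None
```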
### Submission file
The submission file must contain your generated programs for the test set: a CSV file named `submission.csv` with the following header:
```
code1,code2,code3,code4,code5
```
and shape `(5000, 5)` - one row per problem and five Python programs per row, aligned with the test set order.
Each cell must contain valid Python source code as a single string. Do not include any extra columns or indices.
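A minimal sketch of producing a correctly shaped file with pandas; `generate_candidates` is a hypothetical placeholder for your own model call:

```python
import pandas as pd

def generate_candidates(problem):
    # Placeholder for your model: must return exactly five program strings
    return ["print('hello')"] * 5

problems = [{"question": "...", "starter_code": ""}]  # e.g. list(test_dataset)
rows = [generate_candidates(p) for p in problems]
df = pd.DataFrame(rows, columns=[f"code{i}" for i in range(1, 6)])
df.to_csv("submission.csv", index=False)  # header row included, no index column
```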
### Evaluation
The evaluation will be performed on the `submission.csv` file you submit, using the **Pass@5** metric.
For each problem, all 5 submitted solutions are executed against **all test cases** (both public and private).
A problem counts as correct if **at least one** of the 5 submissions passes **every** test case.
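Concretely, the score reduces to a simple fraction — a sketch over illustrative per-program results (not real data), where each inner list records whether each of the five programs passed every test case:

```python
# One inner list per problem: did each of the five programs pass all test cases?
per_problem = [
    [False, True, False, False, False],   # solved by attempt 2
    [False, False, False, False, False],  # unsolved
    [True, True, False, False, False],    # solved
]

# Pass@5 = fraction of problems where any attempt succeeded
pass_at_5 = sum(any(attempts) for attempts in per_problem) / len(per_problem)
```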
---
#### Execution Environment
* **Python version**: All code is executed with **Python 3.10**. Ensure compatibility with this version.
---
#### Resource Limits
* **Time limit** – A fixed limit of about 4 seconds is applied to each test run using Python’s `signal.alarm`.
* **Memory limit** – Is set to the default operating-system memory limit.
If the program exceeds the time limit or any configured memory cap, it is interrupted and the test case fails.
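The alarm-based limit works roughly as sketched below; the 4-second value mirrors the stated limit, and the summation is a stand-in for submitted code:

```python
import signal

class TimeoutException(Exception):
    pass

def timeout_handler(signum, frame):
    raise TimeoutException

signal.signal(signal.SIGALRM, timeout_handler)

signal.alarm(4)                 # arm a 4-second alarm (Unix only)
try:
    result = sum(range(1000))   # stand-in for the submitted program
finally:
    signal.alarm(0)             # always disarm so later code is not interrupted
```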
---
#### Sandbox Restrictions (enforced automatically)
Submitted code runs with selected built-in operations disabled:
* **No process creation or shell commands**
Functions such as `os.system`, `os.fork`, `os.kill`, and `subprocess.Popen` are replaced with harmless stubs.
* **Limited file-system operations**
Destructive functions like `os.remove`, `shutil.rmtree`, and `os.rename` are disabled.
* **Partial environment hardening**
Certain environment-changing operations (e.g. `os.putenv`, `os.chdir`) and some optional modules are disabled.
These protections mainly work by setting restricted functions to `None`.
Attempting to call them typically raises a `TypeError`, causing the test to fail.
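A minimal illustration of the `None`-stub behavior, using a throwaway namespace rather than patching the real `os` module:

```python
import types

# Stand-in namespace instead of the real `os` module
fake_os = types.SimpleNamespace(system=lambda cmd: 0)
fake_os.system = None  # this is how the sandbox disables a function

try:
    fake_os.system("ls")
    blocked = False
except TypeError:       # calling None raises TypeError, failing the test case
    blocked = True
```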
---
#### Code Safety Guidelines
* Write code that **does not attempt** to access the internet, execute system commands, or write to arbitrary files.
* Any such behavior may lead to **disqualification** even if it does not trigger runtime errors.
---
This process ensures consistent, resource-aware, and fair evaluation of all submitted solutions.
#### Evaluation Script
Evaluation will be done using the following script:
```py
import argparse, json, sys
import pandas as pd
from datasets import load_from_disk
import multiprocessing
import numpy as np
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import faulthandler
# used for debugging to time steps
from datetime import datetime
# to run the solution files we're using a timing based approach
import signal
# for capturing the stdout
from io import StringIO
# used for testing the code that reads from input
from unittest.mock import patch, mock_open
from pyext import RuntimeModule
from enum import Enum
class CODE_TYPE(Enum):
call_based = 0
standard_input = 1
# stuff for setting up signal timer
class TimeoutException(Exception):
pass
def timeout_handler(signum, frame):
print("alarm went off")
#return
raise TimeoutException
signal.signal(signal.SIGALRM, timeout_handler)
timeout = 4 # seconds
# used to capture stdout as a list
# from https://stackoverflow.com/a/16571630/6416660
# alternative use redirect_stdout() from contextlib
class Capturing(list):
def __enter__(self):
self._stdout = sys.stdout
sys.stdout = self._stringio = StringIO()
# Make closing the StringIO a no-op
self._stringio.close = lambda x: 1
return self
def __exit__(self, *args):
self.extend(self._stringio.getvalue().splitlines())
del self._stringio # free up some memory
sys.stdout = self._stdout
def run_test(sample, test=None, debug=False):
"""
if test(generated_code) is not None it'll try to run the code.
otherwise it'll just return an input and output pair.
"""
# Disable functionalities that can make destructive changes to the test.
if debug:
print(f"start = {datetime.now().time()}")
try:
in_outs = json.loads(sample["input_output"])
except ValueError:
in_outs = None
if in_outs:
if in_outs.get("fn_name") is None:
which_type = CODE_TYPE.standard_input # Standard input
method_name = None
else:
which_type = CODE_TYPE.call_based # Call-based
method_name = in_outs["fn_name"]
if debug:
print(f"loaded input_output = {datetime.now().time()}")
if test is None:
return in_outs
elif test is not None:
results = []
sol = "import sys\nimport time\nimport itertools\nfrom itertools import accumulate, product, permutations, combinations\nimport collections\nfrom collections import Counter, OrderedDict, deque, defaultdict, ChainMap\nfrom functools import lru_cache\nimport math\nfrom math import sqrt, sin, cos, tan, ceil, fabs, floor, gcd, exp, log, log2\nimport fractions\nfrom typing import List, Tuple\nimport numpy as np\nimport random\nimport heapq\nfrom heapq import *\n"
if debug:
print(f"loading test code = {datetime.now().time()}")
if which_type == CODE_TYPE.call_based:
sol += test
if debug:
print(f"sol = {sol}")
signal.alarm(timeout)
try:
tmp_sol = RuntimeModule.from_string("tmp_sol", "", sol)
if "class Solution" not in test:
tmp = tmp_sol
else:
tmp = tmp_sol.Solution()
signal.alarm(0)
except Exception as e:
signal.alarm(0)
if debug:
print(f"type 0 compilation error = {e}")
results.append(-2)
return results
signal.alarm(0)
elif which_type == CODE_TYPE.standard_input:
# sol
tmp_test = test.split("\n")
new_test = []
for x in tmp_test:
if (not x.startswith("from ")) and (not x.startswith("import ")):
new_test.append("\t" + x + "\n")
else:
new_test.append(x + "\n")
tmp_test = new_test
new_test = ""
started = False
for i in tmp_test:
if i.startswith("\t") and not started:
new_test += "stdin = sys.stdin\nstdout = sys.stdout\n"
new_test += "def code():\n"
new_test += i
started = True
elif started and ((i.startswith("from ")) or (i.startswith("import "))):
new_test += "\t" + i
else:
new_test += i
tmp_test = new_test
sol += tmp_test
if debug:
print(f"sol = {sol}")
method_name = "code"
signal.alarm(timeout)
try:
tmp_sol = RuntimeModule.from_string("tmp_sol", "", sol)
tmp = tmp_sol
signal.alarm(0)
except Exception as e:
signal.alarm(0)
if debug:
print(f"type 1 compilation error = {e}")
results.append(-2)
return results
signal.alarm(0)
if debug:
print(f"get method = {datetime.now().time()}")
try:
method = getattr(tmp, method_name) # get_attr second arg must be str
except:
signal.alarm(0)
e = sys.exc_info()
print(f"unable to get function error = {e}")
results.append(-2)
return results
for index, inputs in enumerate(in_outs["inputs"]):
# JSON forces dictionaries to have string keys; this undoes this (assuming a singleton list)
try:
if isinstance(inputs[0], dict):
inputs = [{int(k): v for k,v in inputs[0].items()}]
except:
True
try:
if isinstance(in_outs["outputs"][index], dict):
in_outs["outputs"][index] = [{int(k): v for k,v in in_outs["outputs"][index].items()}]
except:
True
try:
if isinstance(in_outs["outputs"][index][0], dict):
in_outs["outputs"][index] = [{int(k): v for k,v in in_outs["outputs"][index][0].items()}]
except:
True
if debug:
print(f"time: {datetime.now().time()} testing index = {index} inputs = {inputs}, {type(inputs)}. type = {which_type}")
if which_type == CODE_TYPE.call_based: # Call-based
signal.alarm(timeout)
faulthandler.enable()
try:
output = method(*inputs)
# ground truth sequences are not tuples
if isinstance(output, tuple):
output = list(output)
tmp_result = output == in_outs["outputs"][index]
if isinstance(in_outs["outputs"][index], list) and in_outs["outputs"][index]:
tmp_result = tmp_result or (output == in_outs["outputs"][index][0])
# ground truth sequences are not tuples
try:
if isinstance(output[0], tuple):
tmp_result = tmp_result or ([list(x) for x in output] == in_outs["outputs"][index][0])
except:
True
results.append(tmp_result)
# reset the alarm
signal.alarm(0)
except Exception as e:
signal.alarm(0)
faulthandler.disable()
if debug:
print(f"Standard input runtime error or time limit exceeded error = {e}")
results.append(-1)
continue
faulthandler.disable()
signal.alarm(0)
if debug:
print(f"outputs = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
elif which_type == CODE_TYPE.standard_input: # Standard input
faulthandler.enable()
signal.alarm(timeout)
passed = False
if isinstance(inputs, list):
inputs = "\n".join(inputs)
if isinstance(in_outs['outputs'][index], list):
in_outs['outputs'][index] = "\n".join(in_outs['outputs'][index])
with Capturing() as output:
try:
call_method(method, inputs)
# reset the alarm
signal.alarm(0)
passed = True
except Exception as e:
# runtime error or took too long
signal.alarm(0)
print(f"Call-based runtime error or time limit exceeded error = {repr(e)}{e}")
results.append(-1)
signal.alarm(0)
if not passed:
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"not passed output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"not passed output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
continue
if passed and debug:
print(f"==> output = {output}, test outputs = {in_outs['outputs'][index]}")
if custom_compare_(output, in_outs['outputs'][index]):
tmp_result = True
results.append(tmp_result)
continue
# ground truth sequences are expressed as lists not tuples
if isinstance(output, tuple):
output = list(output)
tmp_result = False
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
if isinstance(output[0], str):
tmp_result = tmp_result or ([e.strip() for e in output] == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check1 exception = {e}")
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try one more time without \n
if isinstance(in_outs["outputs"][index], list):
for tmp_index, i in enumerate(in_outs["outputs"][index]):
in_outs["outputs"][index][tmp_index] = i.split("\n")
in_outs["outputs"][index][tmp_index] = [x.strip() for x in in_outs["outputs"][index][tmp_index] if x]
else:
in_outs["outputs"][index] = in_outs["outputs"][index].split("\n")
in_outs["outputs"][index] = list(filter(len, in_outs["outputs"][index]))
in_outs["outputs"][index] = list(map(lambda x:x.strip(), in_outs["outputs"][index]))
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check2 exception = {e}")
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the output into a split up list too
if isinstance(output, list):
output = list(filter(len, output))
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
if tmp_result == True:
results.append(tmp_result)
continue
try:
tmp_result = (output == [in_outs["outputs"][index]])
if isinstance(in_outs["outputs"][index], list):
tmp_result = tmp_result or (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check3 exception = {e}")
pass
try:
output_float = [float(e) for e in output]
gt_float = [float(e) for e in in_outs['outputs'][index]]
tmp_result = tmp_result or ((len(output_float) == len(gt_float)) and np.allclose(output_float, gt_float))
except Exception as e:
pass
try:
if isinstance(output[0], list):
output_float = [float(e) for e in output[0]]
gt_float = [float(e) for e in in_outs['outputs'][index][0]]
tmp_result = tmp_result or ((len(output_float) == len(gt_float)) and np.allclose(output_float, gt_float))
except Exception as e:
pass
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the stuff into split up list
if isinstance(in_outs["outputs"][index], list):
for tmp_index, i in enumerate(in_outs["outputs"][index]):
in_outs["outputs"][index][tmp_index] = set(i.split())
else:
in_outs["outputs"][index] = set(in_outs["outputs"][index].split())
try:
tmp_result = (output == in_outs["outputs"][index])
except Exception as e:
if debug:
print(f"Failed check4 exception = {e}")
continue
if tmp_result == True:
results.append(tmp_result)
continue
# try by converting the output into a split up list too
if isinstance(output, list):
for tmp_index, i in enumerate(output):
output[tmp_index] = i.split()
output = list(filter(len, output))
for tmp_index, i in enumerate(output):
output[tmp_index] = set(i)
else:
output = output.split()
output = list(filter(len, output))
output = set(output)
try:
tmp_result = (set(frozenset(s) for s in output) == set(frozenset(s) for s in in_outs["outputs"][index]))
except Exception as e:
if debug:
print(f"Failed check5 exception = {e}")
# if they are all numbers, round so that similar numbers are treated as identical
try:
tmp_result = tmp_result or (set(frozenset(round(float(t),3) for t in s) for s in output) ==\
set(frozenset(round(float(t),3) for t in s) for s in in_outs["outputs"][index]))
except Exception as e:
if debug:
print(f"Failed check6 exception = {e}")
if tmp_result == True and debug:
print("PASSED")
results.append(tmp_result)
if debug:
nl = "\n"
if not isinstance(inputs, list):
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs.replace(nl,' new-line ')}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
else:
print(f"output = {output}, test outputs = {in_outs['outputs'][index]}, inputs = {inputs}, {type(inputs)}, {output == [in_outs['outputs'][index]]}")
return results
def custom_compare_(output, ground_truth):
if isinstance(output, list):
output_1 = "\n".join(output)
if stripped_string_compare(output_1, ground_truth):
return True
if isinstance(output, list):
output_2 = [o.lstrip().rstrip() for o in output]
output_2 = "\n".join(output_2)
if stripped_string_compare(output_2, ground_truth):
return True
return False
def stripped_string_compare(s1, s2):
s1 = s1.lstrip().rstrip()
s2 = s2.lstrip().rstrip()
return s1 == s2
def call_method(method, inputs):
if isinstance(inputs, list):
inputs = "\n".join(inputs)
inputs_line_iterator = iter(inputs.split("\n"))
# sys.setrecursionlimit(10000)
# @patch('builtins.input', side_effect=inputs.split("\n"))
@patch('builtins.open', mock_open(read_data=inputs))
@patch('sys.stdin', StringIO(inputs))
@patch('sys.stdin.readline', lambda *args: next(inputs_line_iterator))
@patch('sys.stdin.readlines', lambda *args: inputs.split("\n"))
@patch('sys.stdin.read', lambda *args: inputs)
# @patch('sys.stdout.write', print)
def _inner_call_method(_method):
try:
return _method()
except SystemExit as e:
pass
finally:
pass
return _inner_call_method(method)
def solves_testcases(submission, testcases, verbose=False):
"""
Write submission once to a temp file and run it against all testcases.
"""
timeout = 10
def _temp_run(sample, generation, debug, result):
result.append(run_test(sample, test=generation, debug=debug))
manager = multiprocessing.Manager()
result = manager.list()
p = multiprocessing.Process(
target=_temp_run,
args=(testcases, submission, verbose, result)
)
p.start()
p.join(timeout=timeout + 1)
if p.is_alive():
p.kill()
if not result:
in_outs = json.loads(testcases["input_output"])
# consider that all tests failed
        result = [[0 for i in range(len(in_outs["inputs"]))]]
        if verbose:
            print("global timeout")
    result = result[0]
    fixed = []
for e in result:
if isinstance(e, np.ndarray):
e = e.item(0)
if isinstance(e, np.bool_):
e = bool(e)
fixed.append(e)
return np.all(fixed)
def _passes_any_submission(submissions, testcases, verbose=False):
for submission in submissions:
if solves_testcases(submission, testcases, verbose):
return 1 # counts as correct
return 0
def evaluate_all_testcases(submissions_all, testcases_all, verbose=False, max_workers=None):
assert len(submissions_all) == len(testcases_all), "Number of submissions and testcases do not match"
total = len(submissions_all)
with ThreadPoolExecutor(max_workers=max_workers) as ex:
futures = [
ex.submit(_passes_any_submission, submissions, testcases, verbose)
for submissions, testcases in zip(submissions_all, testcases_all)
]
correct = 0
for fut in tqdm(as_completed(futures), total=total):
correct += fut.result()
return correct / total
def load_testcases(path="./data/test_with_labels"):
"""
Load testcases for evaluation.
"""
ds = load_from_disk(path)
return ds
def evaluate(submissions, testcases):
"""
Compute Pass@5 metric for a list of submissions and testcases.
"""
passAt5 = evaluate_all_testcases(submissions, testcases)
return {"Pass@5": passAt5}
def _cli():
p = argparse.ArgumentParser(description="Evaluate Pass@5 using submission.csv")
p.add_argument("--submission-file", required=True,
help="Path to CSV with columns code1..code5")
a = p.parse_args()
print("Loading test set…")
testcases = load_testcases()
n_test_samples = len(testcases)
print(f"Loading submissions from: {a.submission_file}")
submission_df = pd.read_csv(a.submission_file, header=0)
submission_scripts = submission_df[[f'code{i}' for i in range(1, 6)]].values.tolist()
n_submissions = len(submission_scripts)
assert n_submissions == n_test_samples, f"Submission file row count ({n_submissions}) does not match test set size ({n_test_samples})."
print("Evaluating Pass@5…")
result = evaluate(submission_scripts, testcases)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'codeparrot/apps/all')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["test"]
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['problem_id', 'solutions', 'input_output', 'difficulty', 'url'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'codeparrot/apps/all')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import pandas as pd
from datasets import load_from_disk
from utils import evaluate_all_testcases
def load_testcases(path="./data/test_with_labels"):
"""
Load testcases for evaluation.
"""
ds = load_from_disk(path)
return ds
def evaluate(submissions, testcases):
"""
Compute Pass@5 metric for a list of submissions and testcases.
"""
passAt5 = evaluate_all_testcases(submissions, testcases)
return {"Pass@5": passAt5}
def _cli():
p = argparse.ArgumentParser(description="Evaluate Pass@5 using submission.csv")
p.add_argument("--submission-file", required=True,
help="Path to CSV with columns code1..code5")
a = p.parse_args()
print("Loading test set…")
testcases = load_testcases()
n_test_samples = len(testcases)
print(f"Loading submissions from: {a.submission_file}")
submission_df = pd.read_csv(a.submission_file, header=0)
submission_scripts = submission_df[[f'code{i}' for i in range(1, 6)]].values.tolist()
n_submissions = len(submission_scripts)
assert n_submissions == n_test_samples, f"Submission file row count ({n_submissions}) does not match test set size ({n_test_samples})."
print("Evaluating Pass@5…")
result = evaluate(submission_scripts, testcases)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import json
import pandas as pd
from random import random
from datasets import load_from_disk
from pathlib import Path
def parse_args():
parser = argparse.ArgumentParser(
description="Make gold labels for the APPS dataset for Code Generation."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--output-directory",
default=Path(__file__).resolve().parent,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
return parser.parse_args()
def main(args):
dataset_source_fpath = os.path.join(
Path(args.global_shared_data_dir),
'codeparrot/apps/all'
)
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
codes = []
codes_perm_1 = []
codes_perm_2 = []
incorrect_solution = "print('invalid')"
for idx in range(len(test)):
sample = test[idx]
try:
            # raises if no reference solution exists (empty "solutions" string or empty list)
            sample_solution = json.loads(sample["solutions"])[0]
        except (ValueError, IndexError):
sample_solution = incorrect_solution
codes.append(sample_solution)
rng = random()
if rng < 0.33:
codes_perm_1.append(sample_solution)
codes_perm_2.append(sample_solution)
elif rng < 0.66:
codes_perm_1.append(incorrect_solution)
codes_perm_2.append(sample_solution)
else:
codes_perm_1.append(incorrect_solution)
codes_perm_2.append(incorrect_solution)
df = pd.DataFrame({f"code{i}": codes for i in range(1, 6)})
df_perm_1 = pd.DataFrame({f"code{i}": codes_perm_1 for i in range(1, 6)})
df_perm_2 = pd.DataFrame({f"code{i}": codes_perm_2 for i in range(1, 6)})
# Save to CSV
save_path = Path(args.output_directory).expanduser() / "gold_submission.csv"
df.to_csv(save_path, index=False)
save_path = Path(args.output_directory).expanduser() / "gold_submission_permuted_1.csv"
df_perm_1.to_csv(save_path, index=False)
save_path = Path(args.output_directory).expanduser() / "gold_submission_permuted_2.csv"
df_perm_2.to_csv(save_path, index=False)
if __name__ == '__main__':
args = parse_args()
main(args)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import json
import multiprocessing
import numpy as np
from tqdm import tqdm
from testing_util import run_test
from concurrent.futures import ThreadPoolExecutor, as_completed
def solves_testcases(submission, testcases, verbose=False):
"""
Write submission once to a temp file and run it against all testcases.
"""
timeout = 10
def _temp_run(sample, generation, debug, result):
result.append(run_test(sample, test=generation, debug=debug))
manager = multiprocessing.Manager()
result = manager.list()
p = multiprocessing.Process(
target=_temp_run,
args=(testcases, submission, verbose, result)
)
p.start()
p.join(timeout=timeout + 1)
if p.is_alive():
p.kill()
if not result:
in_outs = json.loads(testcases["input_output"])
# consider that all tests failed
result = [[0 for i in range(len(in_outs["inputs"]))]]
if verbose:
print("global timeout")
result = result[0]
fixed = []
for e in result:
if isinstance(e, np.ndarray):
e = e.item(0)
if isinstance(e, np.bool_):
e = bool(e)
fixed.append(e)
return np.all(fixed)
def _passes_any_submission(submissions, testcases, verbose=False):
for submission in submissions:
if solves_testcases(submission, testcases, verbose):
return 1 # counts as correct
return 0
def evaluate_all_testcases(submissions_all, testcases_all, verbose=False, max_workers=None):
assert len(submissions_all) == len(testcases_all), "Number of submissions and testcases do not match"
total = len(submissions_all)
with ThreadPoolExecutor(max_workers=max_workers) as ex:
futures = [
ex.submit(_passes_any_submission, submissions, testcases, verbose)
for submissions, testcases in zip(submissions_all, testcases_all)
]
correct = 0
for fut in tqdm(as_completed(futures), total=total):
correct += fut.result()
return correct / total
|
CodeRetrievalCodeXGlueMRR | Code | Code Retrieval | google/code_x_glue_tc_nl_code_search_adv | MRR | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
logging_info:
name: CodeRetrievalCodeXGlueMRR
dataset: google/code_x_glue_tc_nl_code_search_adv
category: Code
research_problem: Code Retrieval
output_type: Search results
config: default
train_split: train
test_split: test
input_columns:
- docstring_tokens
- id
scoring_column: id
shape: (19210,2)
custom_gold_labels: true
custom_rad_class: false
metric: MRR
additional_metrics: null
sota:
- sota_paper_title: 'UniXcoder: Unified Cross-Modal Pre-training for Code Representation'
sota_paper_url: https://arxiv.org/pdf/2203.03850
sota_score: 0.6113
sota_notes: from official leaderboard https://microsoft.github.io/CodeXGLUE/
sota_year: 2022
sota_venue: ACL
dataset_paper_url: https://arxiv.org/abs/2102.04664
estimated_worst_score: 0.0
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform code retrieval on google/code_x_glue_tc_nl_code_search_adv.
## Data
### Dataset Structure
This is a retrieval task, so for each split (other than test) there is a set of search queries (`./data/<split>/queries_with_labels`) along with a large corpus to search over (`./data/<split>/search_corpus`).
Each item in the corpus is structured as: `{'id': Value('int32'), 'code': Value('string')}`. For example:
```
{'id': 0,
'code': 'def Func(arg_0, arg_1=\'.\', arg_2=True, arg_3=False, **arg_4):\n arg_5 = get_content(rebuilt_url(arg_0))\n arg_6 = json.loads(match1(arg_5, r\'qualities":({.+?}),"\'))\n arg_7 = match1(arg_5, r\'"video_title"\\s*:\\s*"([^"]+)"\') or \\\n match1(arg_5, r\'"title"\\s*:\\s*"([^"]+)"\')\n arg_7 = unicodize(arg_7)\n\n for arg_8 in [\'1080\',\'720\',\'480\',\'380\',\'240\',\'144\',\'auto\']:\n try:\n arg_9 = arg_6[arg_8][1]["url"]\n if arg_9:\n break\n except KeyError:\n pass\n\n arg_10, arg_11, arg_12 = url_info(arg_9)\n\n print_info(site_info, arg_7, arg_10, arg_12)\n if not arg_3:\n download_urls([arg_9], arg_7, arg_11, arg_12, arg_1=arg_1, arg_2=arg_2)'}
```
For the train and validation splits, you are provided the correct index result for each query.
The query set with labels is structured as `{'id': Value('int32'), 'query': Value('string')}` where `id` is the correct index from the corpus.
For example, a row would be
```
{'id': 0, 'query': 'Downloads Sina videos by URL .'}
```
You can load train data with `load_from_disk('./data/train/search_corpus')` and `load_from_disk('./data/train/queries_with_labels')`
You can load validation data with `load_from_disk('./data/validation/search_corpus')` and `load_from_disk('./data/validation/queries_with_labels')`
where `load_from_disk` is imported `from datasets import load_from_disk`.
You can load test data with `load_from_disk('./data/test/search_corpus')` and `load_from_disk('./data/test/queries')`.
Note that the correct `id` label for the test queries has been removed, i.e. `{'query': 'Downloads Dailymotion videos by URL .'}`
### Submission file
You need to submit a CSV with header: query, rankings
where rankings is json.dumps([list of ranked code ids])
For example if your submissions are in the form of a huggingface dataset
you could do:
```
def save_huggingface_dataset_as_csv(dds, output_fpath):
"""
Takes a huggingface dataset with columns query: str, rankings: [list of ranked code ids]
Saves as a CSV with header: query,rankings
where rankings is json.dumps([list of ranked code ids])
"""
dds = dds.map(
lambda example: {
"rankings": json.dumps(example["rankings"])
}
)
df = dds.to_pandas()
df.to_csv(output_fpath, index=False, header=["query", "rankings"])
```
The head of an example submission.csv would be
```
query,rankings
str - > list Convert XML to URL List . From Biligrab .,"[4773, 10566, 18730, 11359, 16173, 17791, 3428, 4163, 2037, 6838, 2336, 792, 15939, 282, 18883, 10090, 16583, 9041, 5028, 6885, 3809, 7866, 1581, 14613, 1873, 12513, 13734, 4063, 12427, 5984, 4533, 1711, 7378, 1481, 18669, 9190, 17151, 3966, 18913, 15831, 17524, 16150, 12175, 19138, 4662, 17724, 7578, 13530, 14139, 11756, 12014, 6126, 3148, 5176, 13260, 1120, 5799, 718, 5691, 14633, 7990, 2459, 6309, 4778, 8468, 0, 7473, 18590, 3227, 305, 12687, 16419, 3621, 17969, 17759, 7338, 12346, 9032, 15906, 14930, 11270, 7319, 5423, 4218, 8952, 14254, 11863, 18073, 4973, 3067, 3340, 13478, 7898, 6132, 699, 2527, 12903, 8961, 7260, 12805, 17477, 3637, 15206, 1167, 9969, 16952, 7530, 14532, 8599, 17194, 341, 2399, 480, 15207, 16079, 10442, 1354, 18494, 18059, 17307, 8984, 4358, 6874, 11557, 16559, 12936, 12671, 16181, 552, 4913, 11228, 18668, 13003, 9595, 2748, 10221, 4108, 7886]"
Downloads Sina videos by URL .,"[14233, 18494, 2339, 18240, 12558, 2155, 17809, 3995, 10983, 8795, 3908, 15402, 143, 1670, 6689, 15988, 797, 11177, 5111, 1217, 3256, 8938, 1858, 18281, 14473, 8128, 1, 11149, 11423, 1812, 10327, 14244, 3569, 9551, 6388, 1829, 18118, 15332, 1245, 11551, 9383, 14727, 4162, 8270, 7121, 15307, 11203, 17898, 11047, 13513, 9972, 18078, 106, 5244, 14085, 7204, 19157, 5438, 18355, 10039, 1610, 6012, 16207, 11308, 18246, 17214, 37, 14335, 4696, 5671, 5673, 10577, 7152, 10395, 16792, 12104]"
Format text with color or other effects into ANSI escaped string .,"[11572, 7666, 2401, 14887, 5944, 11933, 2718, 10631, 7455, 16890, 10310, 6189, 60, 10529, 8005, 1052, 13208, 910, 5802, 2, 13390, 18448, 5052, 7469, 19103, 17611, 1495, 10175, 11936, 17764, 10045, 1140, 14181, 5388, 9579, 5193, 1757, 8066, 10604, 13277, 12231, 11085, 13859, 6252, 16010, 12249, 6778, 3444, 18797, 15768, 11982, 3507, 18830, 5747, 15577, 6395, 9371, 16578, 9868, 14335, 12163, 9038, 7914, 5210, 18743, 2042, 8736, 17465, 5012, 13136, 3700, 17616, 10176, 12175, 3621, 7553, 15336, 9605, 15419]"
...
```
The submission.csv should be of shape (19210,2).
The rankings list can be anywhere from length 1 (just the top search index) up to a full ranking of every index in the search corpus.
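As a sketch of the submission plumbing, here is a crude lexical-overlap baseline on a hypothetical toy corpus standing in for `./data/test/search_corpus` (the `rank_corpus` helper and the two corpus entries are illustrative only; a real system would use learned embeddings, but the row format is the same):

```python
import json

def rank_corpus(query, corpus):
    """Rank corpus ids by raw token overlap with the query (crude lexical baseline)."""
    q_tokens = set(query.lower().split())
    scored = [(len(q_tokens & set(item["code"].lower().split())), item["id"])
              for item in corpus]
    scored.sort(key=lambda t: (-t[0], t[1]))  # most overlap first, ties broken by id
    return [cid for _, cid in scored]

# Toy corpus standing in for ./data/test/search_corpus
corpus = [
    {"id": 0, "code": "def download_video ( url ) : return fetch ( url )"},
    {"id": 1, "code": "def format_ansi ( text , color ) : return color + text"},
]
rankings = rank_corpus("download videos by url", corpus)
# One submission row: query plus JSON-encoded ranking
row = {"query": "download videos by url", "rankings": json.dumps(rankings)}
```

Writing one such row per test query into a two-column CSV yields a valid `submission.csv`.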
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MRR metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from utils import calculate_scores
def load_test_set():
return load_from_disk('./data/test_with_labels')
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Predictions should be pd.DataFrame with columns: query: str, rankings: json.dumps([list of ranked code ids])
# Labels should be hf Dataset with keys query: str, id: code id
# First json.loads the rankings column of predictions
predictions['rankings'] = predictions['rankings'].apply(json.loads)
# Map to format for calculate_scores
# Predictions are {url: str -> [list of ranked code ids]}
# Labels are {url: str -> code id}
# We'll use the query as the url for both
formatted_predictions = {
q: pred.tolist() if isinstance(pred, np.ndarray) else pred
for q, pred in zip(predictions['query'], predictions['rankings'])
}
formatted_labels = {
q: label
for q, label in zip(labels['query'], labels['id'])
}
return calculate_scores(formatted_labels, formatted_predictions)
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to include a header row (query,rankings)
preds = pd.read_csv(a.submission_file, header=0)
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
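For intuition, the MRR reported by `calculate_scores` is the mean over queries of the reciprocal rank `1/(rank+1)` of the gold id in the predicted ranking, with 0 when the gold id never appears — a compact sketch with made-up ids:

```python
# Reciprocal rank of the gold id within a predicted ranking; 0 if absent.
def reciprocal_rank(gold_id, ranking):
    for rank, idx in enumerate(ranking):
        if idx == gold_id:
            return 1 / (rank + 1)
    return 0.0

per_query = [
    reciprocal_rank(0, [3, 0, 5]),  # gold at rank 1 -> 1/2
    reciprocal_rank(1, [1, 2]),     # gold at rank 0 -> 1
    reciprocal_rank(7, [4, 2]),     # gold missing   -> 0
]
mrr = sum(per_query) / len(per_query)  # 0.5
```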
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
import re
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
For each split we create:
- <split>_search_corpus: A dataset containing the code snippets to be searched.
- <split>_queries_with_labels: A dataset containing the natural language queries and the corresponding code snippet labels.
We also provide test_queries which is a dataset containing only the natural language queries without labels.
The agent has to generate predictions for these queries and return them in the submission.csv file.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'google/code_x_glue_tc_nl_code_search_adv/default')
dataset = load_from_disk(dataset_source_fpath)
for split in ['train', 'validation', 'test']:
dds = dataset[split]
# Search corpus is just id and code with docstring removed from code
search_corpus = select_columns(dds, ["id", "code_tokens"])
search_corpus = search_corpus.map(
lambda example: {
"code": ' '.join(example['code_tokens'])
},
remove_columns=["code_tokens"]
)
search_corpus.save_to_disk(os.path.join(agent_data_mount_dir, f'{split}/search_corpus'))
# Queries are just docstring and the resulting code id
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"])
},
remove_columns=["docstring_tokens"]
)
# shuffle the queries
queries_with_labels = queries_with_labels.shuffle(seed=42)
if split == 'test':
# For the test set we do not provide the labels
queries = select_columns(queries_with_labels, ["query"])
queries.save_to_disk(os.path.join(agent_data_mount_dir, f'test/queries'))
else:
queries_with_labels.save_to_disk(os.path.join(agent_data_mount_dir, f'{split}/queries_with_labels'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
) | #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import re
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'google/code_x_glue_tc_nl_code_search_adv/default')
dataset = load_from_disk(dataset_source_fpath)
dds = dataset['test']
# Queries are just docstring and the resulting code id
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"]),
},
remove_columns=["docstring_tokens"]
)
queries_with_labels.save_to_disk(os.path.join(agent_data_mount_dir, f'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from utils import calculate_scores
def load_test_set():
return load_from_disk('./data/test_with_labels')
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Predictions should be pd.DataFrame with columns: query: str, rankings: json.dumps([list of ranked code ids])
# Labels should be hf Dataset with keys query: str, id: code id
# First json.loads the rankings column of predictions
predictions['rankings'] = predictions['rankings'].apply(json.loads)
# Map to format for calculate_scores
# Predictions are {url: str -> [list of ranked code ids]}
# Labels are {url: str -> code id}
# We'll use the query as the url for both
formatted_predictions = {
q: pred.tolist() if isinstance(pred, np.ndarray) else pred
for q, pred in zip(predictions['query'], predictions['rankings'])
}
formatted_labels = {
q: label
for q, label in zip(labels['query'], labels['id'])
}
return calculate_scores(formatted_labels, formatted_predictions)
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to include a header row (query,rankings)
preds = pd.read_csv(a.submission_file, header=0)
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import re
import copy
import random
hf_repo = 'google/code_x_glue_tc_nl_code_search_adv'
config = 'default'
test_split = 'test'
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
def save_as_csv(dds, output_fpath):
"""
Takes a huggingface dataset with columns query: str, rankings: [list of ranked code ids]
Saves as a CSV with header: query,rankings
where rankings is json.dumps([list of ranked code ids])
"""
dds = dds.map(
lambda example: {
"rankings": json.dumps(example["rankings"])
}
)
df = dds.to_pandas()
df.to_csv(output_fpath, index=False, header=["query", "rankings"])
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, f"{hf_repo}/{config}")
dataset = load_from_disk(dataset_source_fpath)
dds = dataset[test_split]
n_docs = len(dds)
print(f"Loaded {n_docs} documents from the {test_split} split of the dataset.")
# Submission format is a CSV with columns: query: str, rankings: [list of ranked code ids]
queries_with_labels = select_columns(dds, ["docstring_tokens", "id"])
queries_with_labels = queries_with_labels.map(
lambda example: {
"query": " ".join(example["docstring_tokens"]),
"rankings": [example["id"]] + random.sample([i for i in range(n_docs) if i != example["id"]], random.randint(1, 200)) # 1 correct + random incorrect ids
},
remove_columns=["docstring_tokens", "id"]
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission.csv')
save_as_csv(queries_with_labels, csv_fpath)
# Produce a worse submission by shuffling the rankings
worse_queries = queries_with_labels.map(
lambda example: {
"rankings": random.sample(example["rankings"], len(example["rankings"]))
},
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission_permuted_1.csv')
save_as_csv(worse_queries, csv_fpath)
# And another worse submission by reversing the rankings
worse_queries_2 = queries_with_labels.map(
lambda example: {
"rankings": list(reversed(example["rankings"]))
},
)
# Save as CSV instead of a HuggingFace dataset
csv_fpath = os.path.join(output_directory, 'gold_submission_permuted_2.csv')
save_as_csv(worse_queries_2, csv_fpath)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import logging
import sys,json
import numpy as np
def read_answers(filename):
answers={}
with open(filename) as f:
for line in f:
line=line.strip()
js=json.loads(line)
answers[js['url']]=js['idx']
return answers
def read_predictions(filename):
predictions={}
with open(filename) as f:
for line in f:
line=line.strip()
js=json.loads(line)
predictions[js['url']]=js['answers']
return predictions
def calculate_scores(answers,predictions):
scores=[]
for key in answers:
if key not in predictions:
logging.error("Missing prediction for url {}.".format(key))
sys.exit()
flag=False
for rank,idx in enumerate(predictions[key]):
if idx==answers[key]:
scores.append(1/(rank+1))
flag=True
break
if flag is False:
scores.append(0)
result={}
result['MRR']=round(np.mean(scores),4)
return result
def main(
answers,
predictions
):
answers=read_answers(answers)
predictions=read_predictions(predictions)
scores=calculate_scores(answers,predictions)
return scores
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='Evaluate leaderboard predictions for NL-code-search-Adv dataset.')
parser.add_argument('--answers', '-a',help="filename of the labels, in txt format.")
parser.add_argument('--predictions', '-p',help="filename of the leaderboard predictions, in txt format.")
args = parser.parse_args()
main(
args.answers,
args.predictions
)
|
CoreferenceResolutionSuperGLUEWSCAccuracy | Text Extraction and Matching | Coreference Resolution | aps/super_glue | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CoreferenceResolutionSuperGLUEWSCAccuracy
category: Text Extraction and Matching
research_problem: Coreference Resolution
output_type: Text classification
dataset: aps/super_glue
config: wsc
metric: Accuracy
input_columns:
- text
- span1_index
- span2_index
- span1_text
- span2_text
scoring_column: label
train_split: train
test_split: validation
shape: (104,1)
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning'
sota_paper_url: https://arxiv.org/pdf/2111.10952
sota_score: 0.962
sota_notes: Uses T5-Large as backbone which has 770 million parameters.
sota_year: 2022
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1905.00537
estimated_worst_score: 0.3653846154
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to resolve pronoun references in natural language sentences. You will be given a sentence containing an ambiguous pronoun and a possible referent. Your goal is to predict whether the pronoun refers to the referent based on commonsense reasoning. Your predictions will be scored against the `label` column of the test set.
## Data
### Dataset Structure
The default config of the WSC dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"text": string, # The passage containing an ambiguous pronoun and candidate antecedent
"span1_index": int, # The token index of the first candidate antecedent in the passage
"span2_index": int, # The token index of the pronoun in the passage
"span1_text": string, # The text span corresponding to the candidate antecedent
"span2_text": string, # The text span corresponding to the pronoun
"label": int # Binary label: 1 if the pronoun refers to the antecedent, 0 otherwise
}
```
An example entry in the train dataset contains the following:
- `text`: "Mark told Pete many lies about himself, which Pete included in his book. He should have been more skeptical."
- `span1_index`: 0
- `span2_index`: 13
- `span1_text`: "Mark"
- `span2_text`: "He"
- `label`: 0
> Note: The task is to determine whether the pronoun (`span2_text`) refers to the antecedent candidate (`span1_text`). A label of `1` indicates coreference, while `0` indicates no coreference. The `label` column is withheld from the test set. Ensure you preserve the original row order of the test set when producing predictions.
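The token-index convention can be sanity-checked against the example entry above — a minimal sketch assuming the dataset's span indices align with whitespace tokenization (worth spot-checking on a few rows, since some entries may tokenize differently):

```python
# Recover the candidate antecedent and pronoun from token indices.
# Assumes whitespace tokenization matches the dataset's indexing.
text = ("Mark told Pete many lies about himself, which Pete included "
        "in his book. He should have been more skeptical.")
tokens = text.split()
span1 = tokens[0]    # span1_index = 0
span2 = tokens[13]   # span2_index = 13
```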
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
validation_dataset = load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(104,1)` — one prediction per row, aligned with the test set order.
Each row must contain `1` if the pronoun (`span2_text`) refers to the antecedent candidate (`span1_text`), and `0` otherwise. Do not include any extra columns or indices.
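As a format check, a minimal sketch that writes a valid (if trivial) constant-prediction submission — the modeling is up to you; this only demonstrates the required shape and header:

```python
import pandas as pd

# Format-only baseline (not a real model): predict class 0 for every test row
# and write a submission.csv of shape (104, 1) with the single header "Answer".
n_test = 104
submission = pd.DataFrame({"Answer": [0] * n_test})
submission.to_csv("submission.csv", index=False)
```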
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric.
Evaluation will be done using the following script:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "label" not in ds.features:
raise RuntimeError(
"Test data does not contain 'label'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["label"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for the WSC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV is expected to include a single header row ("Answer")
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Prepares the train and test splits and saves them into agent_data_mount_dir; the `label` column is removed from the test split.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'aps/super_glue/wsc')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["validation"]
train = train.remove_columns(['idx'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['idx', 'label'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'aps/super_glue/wsc')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["validation"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "label" not in ds.features:
raise RuntimeError(
"Test data does not contain 'label'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["label"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for TabFact test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to contain a single header row followed by prediction values
# Adjust if your submission format is different (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
CoreferenceResolutionWinograndeAccuracy | Text Extraction and Matching | Coreference Resolution | allenai/winogrande | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CoreferenceResolutionWinograndeAccuracy
category: Text Extraction and Matching
research_problem: Coreference Resolution
output_type: Text Classification
dataset: allenai/winogrande
config: winogrande_xl
metric: Accuracy
input_columns:
- sentence
- option1
- option2
scoring_column: answer
train_split: train
test_split: validation
shape: [1531]
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'TTTTTackling WinoGrande Schemas'
sota_paper_url: https://arxiv.org/pdf/2003.08380
sota_score: 0.854
sota_notes: "The SOTA paper uses the 3B-parameter T5 model as the backbone, fine-tuned using the Google Colab TPU v2."
sota_year: 2020
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/1907.10641
estimated_worst_score: 0.4664562
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to resolve ambiguous references in natural language sentences. You will be given a sentence containing a gap left for a possible referent. Your goal is to predict which referent is most likely to fill the gap based on commonsense reasoning. Your predictions will be scored against the `answer` column of the test set.
## Data
### Dataset Structure
The default config of the Winogrande dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"sentence": string, # A full sentence containing an ambiguous referent and two candidate antecedents. The ambiguous referent is represented by an underscore in the sentence.
"option1": string, # The first candidate antecedent mentioned in the sentence
"option2": string, # The second candidate antecedent mentioned in the sentence
"answer": string, # The index of the correct antecedent from the two options that the pronoun refers to (either "1" if the correct antecedent is option1 or "2" if the correct antecedent is option2)
}
```
An example entry in the train dataset contains the following:
- `sentence`: "Ian volunteered to eat Dennis's menudo after already having a bowl because _ despised eating intestine."
- `option1`: "Ian"
- `option2`: "Dennis"
- `answer`: "2"
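Because the gap is marked with an underscore, a simple preprocessing step is to substitute each candidate into the sentence so the two completions can be scored by a model. A minimal sketch (the `fill_options` helper name is ours, not part of the dataset):

```python
def fill_options(sentence: str, option1: str, option2: str):
    """Return the sentence with each candidate antecedent substituted for '_'."""
    return sentence.replace("_", option1), sentence.replace("_", option2)

# Using the train example above
s1, s2 = fill_options(
    "Ian volunteered to eat Dennis's menudo after already having a bowl "
    "because _ despised eating intestine.",
    "Ian",
    "Dennis",
)
```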
> Note: The `answer` column is not available in the test set provided. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(1531, 1)` — one prediction per row, aligned with the test set order.
Each row must contain the string "1" if the correct antecedent is `option1` or string "2" if the correct antecedent is `option2`. Do not include any extra columns or indices.
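As a concrete sketch, the file can be written with pandas; the three-row `predictions` list here is a placeholder for your model's 1531 test predictions:

```python
import pandas as pd

# Placeholder predictions; a real submission needs one "1"/"2" per test row, in order
predictions = ["1", "2", "1"]

submission = pd.DataFrame({"Answer": predictions})
submission.to_csv("submission.csv", index=False)  # writes the header plus one value per row
```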
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric.
Evaluation will be done using the following script:
```py
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["answer"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for Winogrande test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to contain a single header row followed by prediction values
# Adjust if your submission format is different (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Prepares the train and test splits in agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'allenai/winogrande/winogrande_xl')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["validation"]
test = test.remove_columns(['answer'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'allenai/winogrande/winogrande_xl')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["validation"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
labels = list(ds["answer"])
return labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for Winogrande test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to contain a single header row followed by prediction values
# Adjust if your submission format is different (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
CvMolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: CvMolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: c_v
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: false
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 0.021
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 132.63319396972656
optimal_score: 0.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules which is known as the **heat capacity at constant volume (c_v)**. This is a fundamental thermodynamic property that measures the amount of energy required to raise the temperature of one mole of a substance by one degree Kelvin at constant volume. In the qm9 dataset, c_v values are computed at 298.15 K and given in units of cal/(mol·K). This property is crucial for understanding molecular thermodynamics, phase transitions, and energy storage capabilities of chemical compounds. Your predictions will be scored against the `c_v` column of the test set.
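In standard thermodynamic notation (a definition recap, not part of the dataset card), the target is the temperature derivative of the internal energy at fixed volume, which QM9 reports at 298.15 K in cal/(mol·K):

```latex
C_V = \left( \frac{\partial U}{\partial T} \right)_V
```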
## Data
### Dataset Structure
The default config of qm9 has the following structure. Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"composition": List[int64], # Vector encoding the count of each element in the molecule
"id": string, # Unique identifier for the molecule
"c_v": float32, # Target: heat capacity at constant volume in cal/(mol·K); removed from the test split
}
```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804,1.085800051689148,0.00800000037997961],
[0.002199999988079071,-0.006000000052154064,0.0020000000949949026],
[1.0117000341415405,1.4637999534606934,0.0003000000142492354],
[-0.5407999753952026,1.4474999904632568,-0.8766000270843506],
[-0.5238000154495239,1.4378999471664429,0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
'c_v': 40.306999
}
```
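The `composition` vector above appears to be indexed by atomic number: entry `z` counts the atoms with atomic number `z` (4 at index 1 for hydrogen and 1 at index 6 for carbon in the example). Assuming that convention holds, it can be reconstructed from `atomic_numbers` with a bincount; the vector length of 120 is inferred from the example and is an assumption:

```python
import numpy as np

def composition_vector(atomic_numbers, size=120):
    """Count atoms per atomic number: entry z is the number of atoms with Z == z."""
    return np.bincount(atomic_numbers, minlength=size)

comp = composition_vector([6, 1, 1, 1, 1])  # the example molecule: one C, four H
```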
> Note: The scoring column (`c_v`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the predicted values for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
c_v
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
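As a concrete sketch, the file can be written with pandas; the constant value (the example molecule's c_v) and the five rows here are placeholders for your model's 10000 test predictions:

```python
import numpy as np
import pandas as pd

# Placeholder predictions; a real submission needs one value per test molecule, in order
preds = np.full(5, 40.306999, dtype=np.float32)

pd.DataFrame({"c_v": preds}).to_csv("submission.csv", index=False)
```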
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_from_disk
from torchmetrics import MeanAbsoluteError
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["c_v"])
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# Placeholder regression predictions drawn uniformly from [0, 1)
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to contain a single header row followed by prediction values
# Adjust if your submission format is different (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
# Remove all scoring columns except c_v from train set (keep c_v for training)
train = train.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
# Remove all scoring columns including c_v from test set (c_v is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json, numpy as np, pandas as pd
# Conditional torch import - might be added by generated functions if needed
# import torch
from datasets import load_dataset, load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["c_v"])
import torch
import numpy as np
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
import numpy as np
import random
import string
def generate_dummy_predictions(n_samples):
# The target's exact nature is unspecified here, so scalar values are a reasonable default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is read with a header row (header=0) and should contain only prediction values.
# Adjust if your submission format differs (e.g., no header, or extra columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
GMolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: GMolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: G
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: true
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 7.53
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 11185110
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules which is known as the **Gibbs free energy (G)**. This is a fundamental thermodynamic property that represents the free energy of formation at 298.15 K and 1 atmosphere. The Gibbs free energy determines the spontaneity of chemical reactions and the thermodynamic stability of molecular structures. It combines both enthalpic and entropic contributions to provide a measure of the chemical potential of molecules. Values are given in meV, with more negative values indicating greater thermodynamic stability. Your predictions will be scored against the `G` column of the test set.
## Data
### Dataset Structure
The default config of qm9 has the following structure. Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"composition": List[int64], # Vector encoding the count of each element in the molecule
}```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804,1.085800051689148,0.00800000037997961],
[0.002199999988079071,-0.006000000052154064,0.0020000000949949026],
[1.0117000341415405,1.4637999534606934,0.0003000000142492354],
[-0.5407999753952026,1.4474999904632568,-0.8766000270843506],
[-0.5238000154495239,1.4378999471664429,0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
'G': -9928.413086
}
```
> Note: The scoring column (`G`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the predicted values for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
G
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
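For reference, here is a minimal sketch of writing a correctly shaped submission file. The constant baseline value is a hypothetical placeholder (e.g., the mean of the train split's `G` column), not part of the task specification:

```python
import numpy as np
import pandas as pd

# Constant-baseline predictions; a real model would predict one G value per molecule.
# 10000 matches the test set size stated above; the baseline value is illustrative.
n_test = 10000
baseline_value = -9928.41  # meV, hypothetical placeholder

submission = pd.DataFrame({"G": np.full(n_test, baseline_value)})
submission.to_csv("submission.csv", index=False)
```

Reading the file back with `pd.read_csv("submission.csv")` should yield a `(10000, 1)` frame with the single header `G`, matching what the evaluation script expects.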
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse, json, numpy as np, pandas as pd
# Conditional torch import - might be added by generated functions if needed
# import torch
from datasets import load_dataset, load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["G"])
import torch
import numpy as np
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
import numpy as np
import random
import string
def generate_dummy_predictions(n_samples):
# The target's exact nature is unspecified here, so scalar values are a reasonable default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is read with a header row (header=0) and should contain only prediction values.
# Adjust if your submission format differs (e.g., no header, or extra columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
val = dataset['val']
test = dataset['test']
# Scale G labels by 1000
train = train.map(lambda example: {'G': example['G'] * 1000})
val = val.map(lambda example: {'G': example['G'] * 1000})
test = test.map(lambda example: {'G': example['G'] * 1000})
# Remove all scoring columns except G from train set (keep G for training)
train = train.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
val = val.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
val.save_to_disk(os.path.join(agent_data_mount_dir, 'val'))
# Remove all scoring columns including G from test set (G is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale G labels by 1000
test = test.map(lambda x: {'G': x['G'] * 1000})
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json, numpy as np, pandas as pd
# Conditional torch import - might be added by generated functions if needed
# import torch
from datasets import load_dataset, load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["G"])
import torch
import numpy as np
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
import numpy as np
import random
import string
def generate_dummy_predictions(n_samples):
# The target's exact nature is unspecified here, so scalar values are a reasonable default
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is read with a header row (header=0) and should contain only prediction values.
# Adjust if your submission format differs (e.g., no header, or extra columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import argparse
import pandas as pd
from datasets import load_from_disk
def main(
global_shared_data_dir,
output_directory
):
"""
Creates gold_submission.csv files with scaled G labels (multiplied by 1000)
"""
# Load the QM9 dataset from the shared data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale G labels by 1000
scaled_labels = [label * 1000 for label in test['G']]
# Save as gold_submission.csv
output_file = os.path.join(output_directory, 'gold_submission.csv')
pd.Series(scaled_labels).to_csv(output_file, index=False, header=['G'])
print(f"Saved scaled G labels to {output_file}")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV with scaled labels.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| null |
GraphRegressionZincMae | Molecules and Proteins ML | Graph Regression | graphs-datasets/ZINC | MAE | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: GraphRegressionZincMae
dataset: graphs-datasets/ZINC
category: Molecules and Proteins ML
research_problem: Graph Regression
output_type: Text Generation
config: default
train_split: train
test_split: test
input_columns:
- x
- node_feat
- edge_index
- edge_attr
- num_nodes
scoring_column: y
shape:
- 5000
custom_gold_labels: false
custom_rad_class: false
metric: MAE
additional_metrics: null
sota:
- sota_paper_title: 'An end-to-end attention-based approach for learning on graphs'
sota_paper_url: https://arxiv.org/pdf/2402.10793v2
sota_score: 0.017
sota_year: 2024
sota_venue: Nature Communications
dataset_paper_url: https://pubs.acs.org/doi/10.1021/ci049714+
estimated_worst_score: 9.699924
optimal_score: 0.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is a molecules-and-proteins task: perform Graph Regression on the graphs-datasets/ZINC dataset. Your predictions will be scored against the `y` column of the test set.
## Data
### Dataset Structure
The default config of graphs-datasets/ZINC has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
``` {'node_feat': List(List(Value('int64'))), 'edge_index': List(List(Value('int64'))), 'edge_attr': List(List(Value('int64'))), 'y': List(Value('float64')), 'num_nodes': Value('int64')} ```
Train data can be loaded using `load_from_disk('./data/train')` and test data using `load_from_disk('./data/test')`.
Note that the scoring column `y` has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` y ```
And it should be of shape (5000,).
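As a sanity check, a submission of the required shape can be produced like this. The constant zero baseline is purely illustrative; a real model would predict one `y` value per test graph:

```python
import numpy as np
import pandas as pd

# One prediction per test graph; 5000 matches the test set size stated above.
n_test = 5000
submission = pd.DataFrame({"y": np.zeros(n_test)})
submission.to_csv("submission.csv", index=False)
```

Reading the file back with `pd.read_csv("submission.csv")` should give 5000 rows under the single header `y`, which is the layout the evaluation script below expects.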
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MAE metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse, json
import ast
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["y"])
def evaluate(predictions, labels):
"""
Compute Mean Absolute Error (MAE) for graph regression on ZINC.
Returns only {"Mae": score}.
"""
# Convert to numeric numpy arrays
clean_predictions = []
for p in predictions:
if isinstance(p, str):
parsed = ast.literal_eval(p) # safely turns "[0.95]" into a Python list [0.95]
if isinstance(parsed, list):
clean_predictions.append(parsed[0]) # take the first element if it's a single-item list
else:
clean_predictions.append(float(parsed))
else:
clean_predictions.append(float(p))
predictions = clean_predictions
y_true = np.asarray(labels, dtype=float)
y_pred = np.asarray(predictions, dtype=float)
# Squeeze trailing singleton dims (e.g., shape (N,1) -> (N,))
if y_pred.ndim > 1 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(-1)
if y_true.ndim > 1 and y_true.shape[1] == 1:
y_true = y_true.squeeze(-1)
if y_pred.shape != y_true.shape:
raise ValueError(
f"Shape mismatch: predictions {y_pred.shape} vs labels {y_true.shape}"
)
if not np.all(np.isfinite(y_pred)):
raise ValueError("Predictions contain non-finite values (NaN/Inf).")
if not np.all(np.isfinite(y_true)):
raise ValueError("Labels contain non-finite values (NaN/Inf).")
mae = float(np.mean(np.abs(y_pred - y_true))) if y_true.size > 0 else 0.0
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is read with a header row (header=0) and should contain only prediction values.
# Adjust if your submission format differs (e.g., no header, or extra columns).
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'graphs-datasets/ZINC/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['x', 'node_feat', 'edge_index', 'edge_attr', 'y', 'num_nodes'])
test = select_columns(test, ['x', 'node_feat', 'edge_index', 'edge_attr', 'num_nodes'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'graphs-datasets/ZINC/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import ast
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["y"])
def evaluate(predictions, labels):
"""
Compute Mean Absolute Error (MAE) for graph regression on ZINC.
Returns only {"MAE": score}.
"""
# Convert to numeric numpy arrays
clean_predictions = []
for p in predictions:
if isinstance(p, str):
parsed = ast.literal_eval(p) # safely turns "[0.95]" into a Python list [0.95]
if isinstance(parsed, list):
clean_predictions.append(parsed[0]) # take the first element if it's a single-item list
else:
clean_predictions.append(float(parsed))
else:
clean_predictions.append(float(p))
predictions = clean_predictions
y_true = np.asarray(labels, dtype=float)
y_pred = np.asarray(predictions, dtype=float)
# Squeeze trailing singleton dims (e.g., shape (N,1) -> (N,))
if y_pred.ndim > 1 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(-1)
if y_true.ndim > 1 and y_true.shape[1] == 1:
y_true = y_true.squeeze(-1)
if y_pred.shape != y_true.shape:
raise ValueError(
f"Shape mismatch: predictions {y_pred.shape} vs labels {y_true.shape}"
)
if not np.all(np.isfinite(y_pred)):
raise ValueError("Predictions contain non-finite values (NaN/Inf).")
if not np.all(np.isfinite(y_true)):
raise ValueError("Labels contain non-finite values (NaN/Inf).")
mae = float(np.mean(np.abs(y_pred - y_true))) if y_true.size > 0 else 0.0
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a header row followed by one prediction per line
# Adjust if your submission format differs (e.g., no header, extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
MathQuestionAnsweringSVAMPAccuracy | Math | Math Question Answering | ChilleD/SVAMP | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- pandas
- numpy
logging_info:
name: MathQuestionAnsweringSVAMPAccuracy
category: Math
research_problem: Math Question Answering
output_type: text-generation
dataset: ChilleD/SVAMP
metric: Accuracy
input_columns:
- question_concat
scoring_column: Answer
shape: 300,1
config: default
train_split: train
test_split: test
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems'
sota_paper_url: https://arxiv.org/pdf/2404.14963v5
sota_score: 0.942
sota_year: 2026
sota_venue: Frontiers of Computer Science
dataset_paper_url: https://arxiv.org/abs/2103.07191
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to solve math word problems. Each example presents a short story followed by a specific question; read the text and predict the correct numerical answer. Your predictions will be scored against the `Answer` column of the test set.
## Data
### Dataset Structure
The default config of SVAMP has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"ID": string, # Unique identifier for the problem
"Body": string, # The narrative or context text describing the scenario
"Question": string, # The actual math question asked based on the Body
"Equation": string, # The ground-truth equation used to compute the answer
"Answer": string, # The correct numerical solution to the problem (as text)
"Type": string, # The problem category/type. Is one of ["Subtraction", "Addition", "Common-Division", "Multiplication"].
"question_concat": string # Concatenation of Body and Question into one text field
}
```
An example entry in the train dataset contains the following:
- `ID`: "chal-777"
- `Body`: "There are 87 oranges and 290 bananas in Philip's collection. If the bananas are organized into 2 groups and oranges are organized into 93 groups"
- `Question`: "How big is each group of bananas?"
- `Equation`: "( 290.0 / 2.0 )"
- `Answer`: "145"
- `Type`: "Common-Division"
- `question_concat`: "There are 87 oranges and 290 bananas in Philip's collection. If the bananas are organized into 2 groups and oranges are organized into 93 groups How big is each group of bananas?"
> Note: The scoring columns (`Equation`, `Answer`, `Type`) have been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
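For a sanity check, each training example's `Equation` string can be evaluated to reproduce its `Answer`. Below is a minimal sketch using a restricted arithmetic evaluator; the helper name `solve_equation` is illustrative and not part of the dataset or the provided scripts:

```python
import ast
import operator

# Restricted evaluator: only arithmetic AST nodes are allowed, so equation
# strings like "( 290.0 / 2.0 )" cannot execute arbitrary code.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def solve_equation(equation: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {equation!r}")
    return _eval(ast.parse(equation, mode="eval").body)

print(solve_equation("( 290.0 / 2.0 )"))  # 145.0
```

Note that `Equation` is removed from the test split, so this is only useful for validating a model against the train split.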
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
Answer
```
And it should be of shape `(300,1)` — one prediction per row, aligned with the test set order.
Each row must contain the numerical answer to the question. Do not include any extra columns or indices.
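A minimal sketch of writing a conforming `submission.csv` with pandas; the `predictions` list here is a placeholder for your model's outputs, one per test example in the original row order:

```python
import pandas as pd

# Placeholder predictions; replace with your model's 300 answers,
# aligned with the test set row order.
predictions = [145, 36, 7]

submission = pd.DataFrame({"Answer": predictions})
# index=False keeps the file to a single `Answer` column with no index column.
submission.to_csv("submission.csv", index=False)
```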
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the accuracy metric.
Evaluation will be done using the following script:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "Answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'Answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
answers = list(ds["Answer"])
return answers
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for SVAMP test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row (`Answer`) with one prediction per line
# Adjust if your submission format differs (e.g., no header, extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the train and test splits from global_shared_data_dir, strips the
scoring columns from test, and saves both into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ChilleD/SVAMP/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["test"]
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['Equation', 'Answer', 'Type'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ChilleD/SVAMP/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
if "Answer" not in ds.features:
raise RuntimeError(
"Test data does not contain 'Answer'. "
"Use a labeled copy like './data/test_with_labels'."
)
answers = list(ds["Answer"])
return answers
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
correct = np.fromiter(
(int(p) == int(y) for p, y in zip(predictions, labels)), dtype=bool
)
return {"Accuracy": float(correct.mean())}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for SVAMP test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_answers()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row (`Answer`) with one prediction per line
# Adjust if your submission format differs (e.g., no header, extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
QuestionAnsweringDuoRCAccuracy | Question Answering | Question Answering | ibm-research/duorc | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: QuestionAnsweringDuoRCAccuracy
category: Question Answering
research_problem: Question Answering
output_type: Text Classification
dataset: ibm-research/duorc
config: ParaphraseRC
metric: Accuracy
input_columns:
- title
- plot
- question
scoring_column: answers
train_split: train
test_split: test
shape: [15857, 2]
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Grid Search Hyperparameter Benchmarking of BERT, ALBERT, and LongFormer on DuoRC'
sota_paper_url: https://arxiv.org/pdf/2101.06326
sota_score: 0.4648
sota_notes: Best score achieved in SOTA paper using ALBERT model.
sota_year: 2021
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/1804.07927
estimated_worst_score: 0.0
optimal_score: 1.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to build a model that solves the project's TASK following the instructions provided below.
TASK: Your task is to answer questions given a large context. You will be provided the title of a story and context describing its plot, and then asked a question whose answer you should retrieve from the context. Your predictions will be scored against the list of candidate answers in the `answers` column of the test set.
## Data
### Dataset Structure
The default config of the DuoRC dataset has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"plot_id": string, # The ID of the story
"plot": string, # The plot of the story
"title": string, # The title of the story
"question": string, # A question regarding a detail that may or may not be provided by the story
"answers": list[string], # A list of candidate answers to the question
"no_answer": bool, # A boolean indicating whether the question has no answer in the context.
}
```
An example entry in the train dataset contains the following:
- `plot_id`: "/m/03vyhn"
- `plot`: "Set in the second half of the 22nd century, Mars has been 84% terraformed, allowing humans to walk on the surface without pressure suits. Martian society has become matriarchal, with women in most positions of authority. The story concerns police officer Melanie Ballard (Natasha Henstridge), second in command of a team alongside Sergeant Jericho (Jason Statham) sent to a remote mining outpost to transport prisoner Desolation Williams (Ice Cube). Arriving at the remote mining town, Ballard finds all of the people missing. She learns that they had discovered an underground doorway created by an ancient Martian civilization."
- `title`: "Ghosts of Mars"
- `question`: "Who is colonized by a high tech company?"
- `answers`: ["Humans on Mars", "Mars"]
- `no_answer`: false
> Note: The `answers` and `no_answer` columns are not available in the test set provided. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
validation_dataset = load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the selected answers for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
has_answer,answer
```
And it should be of shape `(15857, 2)` — one prediction per row, aligned with the test set order.
The `has_answer` column should contain a `true` or `false` value indicating whether the question has an answer in the supporting context.
The `answer` column should contain the answer if the `has_answer` column is true. If `has_answer` is false, any text in the `answer` column is ignored.
Do not include any extra columns or indices.
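A minimal sketch of producing a conforming two-column `submission.csv` with pandas; the example predictions are placeholders for your model's outputs:

```python
import pandas as pd

# Placeholder predictions, aligned with the test set row order.
rows = [
    {"has_answer": True, "answer": "Mars"},
    {"has_answer": False, "answer": ""},  # answer text is ignored when has_answer is false
]

submission = pd.DataFrame(rows, columns=["has_answer", "answer"])
submission.to_csv("submission.csv", index=False)
```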
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric. For each question, if the submitted `has_answer` is `false`, the prediction is only considered correct if the `no_answer` label is `true`. If the submitted `has_answer` is `true`, the prediction is only considered correct if the `no_answer` label is `false` and the submitted `answer` is an exact match of one of the `answers` in the label.
Evaluation will be done using the following script:
```py
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
labels = list(ds["answer"])
no_answers = list(ds["no_answers"])
return labels, no_answers
def is_correct_answer(submission, candidate_answers):
submission = submission.lower()
for candidate_answer in candidate_answers:
if candidate_answer.lower() == submission:
return True
return False
def evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers):
"""
Returns a dict of metric_name -> value
"""
correct = 0
total = 0
for submission_answer, submission_has_answer, label_answer, label_no_answer in zip(
submission_answers, submission_has_answers, label_answers, label_no_answers
):
total += 1
if not submission_has_answer:
correct += label_no_answer
else:
correct += is_correct_answer(submission_answer, label_answer)
return {"Accuracy": correct/total}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for DuoRC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
label_answers, label_no_answers = load_test_answers()
n_test_samples = len(label_answers)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a header row (`has_answer,answer`) with one prediction per line
# Adjust if your submission format differs (e.g., no header, extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
submission_answers = list(submission_df["answer"])
submission_has_answers = list(submission_df["has_answer"])
if len(submission_answers) != n_test_samples:
raise ValueError(
f"Submission file row count ({len(submission_answers)}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the train, validation, and test splits from global_shared_data_dir, strips
the scoring columns from test, and saves all splits into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ibm-research/duorc/ParaphraseRC')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
train = train.remove_columns(['question_id'])
validation = validation.remove_columns(['question_id'])
test = test.remove_columns(['question_id', 'answers'])
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
validation.save_to_disk(os.path.join(agent_data_mount_dir, 'validation'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
dataset_source_fpath = os.path.join(global_shared_data_dir, 'ibm-research/duorc/ParaphraseRC')
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
test.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench_raw_data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_answers(path='./data/test_with_labels'):
"""
Load answers for evaluation (string labels).
"""
ds = load_from_disk(path)
labels = list(ds["answers"])
no_answers = list(ds["no_answer"])
return labels, no_answers
def is_correct_answer(submission, candidate_answers):
submission = submission.lower()
for candidate_answer in candidate_answers:
if candidate_answer.lower() == submission:
return True
return False
def evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers):
"""
Returns a dict of metric_name -> value
"""
correct = 0
total = 0
for submission_answer, submission_has_answer, label_answer, label_no_answer in zip(
submission_answers, submission_has_answers, label_answers, label_no_answers
):
total += 1
if not submission_has_answer:
correct += label_no_answer
else:
is_correct = is_correct_answer(submission_answer, label_answer)
correct += is_correct
return {"Accuracy": correct/total}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for DuoRC test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
label_answers, label_no_answers = load_test_answers()
n_test_samples = len(label_answers)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row with `answer` and `has_answer` columns.
submission_df = pd.read_csv(
a.submission_file,
header=0,
keep_default_na=False
)
submission_answers = list(submission_df["answer"])
submission_has_answers = list(submission_df["has_answer"])
if len(submission_answers) != n_test_samples:
raise ValueError(
f"Submission file row count ({len(submission_answers)}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(submission_answers, submission_has_answers, label_answers, label_no_answers)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import numpy as np
from pathlib import Path
from datasets import load_from_disk
def parse_args():
parser = argparse.ArgumentParser(
description="Make gold labels for the DuoRC dataset for Question Answering."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--output-directory",
        default=Path(__file__).resolve().parent,
        help="Directory where the gold labels CSV is written, e.g. ~/aira-dojo/data"
    )
return parser.parse_args()
def main(args):
dataset_source_fpath = os.path.join(
Path(args.global_shared_data_dir),
'ibm-research/duorc/ParaphraseRC'
)
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
df = test.to_pandas()
    df_labels = df[["no_answer", "answers"]].rename(
        columns={"no_answer": "has_answer"}
    )
    # Keep only the first gold answer string (empty when the question is unanswerable)
    df_labels["answers"] = df_labels["answers"].apply(lambda x: "" if len(x) == 0 else x[0])
    # Invert the flag: the column now marks whether the question has an answer
    df_labels["has_answer"] = np.logical_not(df_labels["has_answer"])
save_path = (
Path(args.output_directory).expanduser() /
"gold_submission.csv"
)
df_labels.to_csv(save_path, index=False)
if __name__ == '__main__':
args = parse_args()
main(args)
| null |
QuestionAnsweringEli5Rouge1 | Question Answering | Question Answering | Pavithree/eli5 | Rouge1 | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- rouge_score
logging_info:
name: QuestionAnsweringEli5Rouge1
category: Question Answering
research_problem: Question Answering
dataset: Pavithree/eli5
metric: Rouge1
shape: (10000,)
scoring_column: answers
config: default
output_type: Text generation
train_split: train
test_split: test
input_columns:
- title
- answers
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator'
sota_paper_url: https://arxiv.org/abs/2305.15099v2
sota_score: 0.269
sota_year: 2023
sota_venue: ACL
dataset_paper_url: https://arxiv.org/abs/1907.09190
estimated_worst_score: 0.002451528
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform Question Answering on Pavithree/eli5. Your predictions will be scored against the answers column of the test set.
## Data
### Dataset Structure
The default config of Pavithree/eli5 has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
``` {'q_id': Value('string'), 'title': Value('string'), 'selftext': Value('string'), 'document': Value('string'), 'subreddit': Value('string'), 'url': Value('string'), 'answers': {'a_id': List(Value('string')), 'score': List(Value('int64')), 'text': List(Value('string'))}, 'title_urls': List(Value('string')), 'selftext_urls': List(Value('string')), 'answers_urls': List(List(Value('string')))} ```
Train data can be loaded using load_from_disk('./data/train') and test data can be loaded using load_from_disk('./data/test').
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` answers ```
And it should be of shape (10000,).
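The format above can be produced with a few lines of pandas. A minimal sketch, assuming your model has already produced one answer string per test example (the two dummy predictions below are placeholders, not real model output):

```python
import pandas as pd

# Hypothetical predictions -- in a real run these come from your model,
# one generated answer string per test example (10000 rows for this task).
predictions = [
    "Because warm air can hold more moisture than cold air.",
    "It is largely a matter of supply and demand.",
]

# The header must be exactly `answers`, matching the scoring column name.
pd.DataFrame({"answers": predictions}).to_csv("submission.csv", index=False)
```

The evaluation script reads this file with `header=0` and squeezes it to a 1-D array, so a single named column in row order is all that is required.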
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Rouge1 metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
from typing import List, Dict
from rouge_score import rouge_scorer, scoring
def compute_rouge(preds, refs, types=("rouge1",), stemmer=True):
scorer = rouge_scorer.RougeScorer(types, use_stemmer=stemmer)
agg = scoring.BootstrapAggregator()
for p, r in zip(preds, refs):
agg.add_scores(scorer.score(r, p)) # (reference, prediction)
out = agg.aggregate()
# return mid F1 like TorchMetrics
return {k: v.mid.fmeasure for k, v in out.items()}
def load_test_set():
# dataset = load_dataset('Pavithree/eli5', 'default', split='test')
dataset = load_from_disk('./data/test_with_labels')
    ground_truth = dataset.map(
        lambda x: {"answers": x["answers"]["text"][0] if x["answers"]["text"] else None}
    )['answers']  # list of first-answer strings, one per example
return ground_truth
def evaluate_function(predictions: List[str], labels: List[str]) -> Dict[str, float]:
"""
    Computes the ROUGE-1 metric for a question answering task.
    Args:
        predictions (List[str]): A list of predicted answers.
        labels (List[str]): A list of ground truth answers.
Returns:
Dict[str, float]: A dictionary containing the ROUGE-1 F-measure score.
"""
print(f"Calling evaluate on {len(predictions)} predictions and {len(labels)} labels...")
results = compute_rouge(predictions, labels)
rouge1_fmeasure = float(results["rouge1"])
    return {"Rouge1": rouge1_fmeasure}
def _cli():
p = argparse.ArgumentParser(
        description="Evaluate predictions for the eli5 test split using ROUGE-1."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row with the prediction column.
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate_function(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Pavithree/eli5', 'default')
    try:
        dataset = load_from_disk(dataset_source_fpath)
    except Exception:
        logger.info("Standard loading of the dataset failed; trying per-split directories...")
        try:
            train = load_from_disk(os.path.join(dataset_source_fpath, 'train'))
            test = load_from_disk(os.path.join(dataset_source_fpath, 'test'))
            dataset = {'train': train, 'test': test}
        except Exception as exc:
            raise Exception("Failed to load dataset from both options.") from exc
train = dataset['train']
test = dataset['test']
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['answers'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Loads the test set from global_shared_data_dir into agent_data_mount_dir.
    Copies submission.csv from agent_log_dir into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Pavithree/eli5', 'default')
    try:
        dataset = load_from_disk(dataset_source_fpath)
    except Exception:
        logger.info("Standard loading of the dataset failed; trying the per-split directory...")
        try:
            test = load_from_disk(os.path.join(dataset_source_fpath, 'test'))
            dataset = {'test': test}
        except Exception as exc:
            raise Exception("Failed to load dataset from both options.") from exc
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
from typing import List, Dict
from rouge_score import rouge_scorer, scoring
def compute_rouge(preds, refs, types=("rouge1",), stemmer=True):
scorer = rouge_scorer.RougeScorer(types, use_stemmer=stemmer)
agg = scoring.BootstrapAggregator()
for p, r in zip(preds, refs):
agg.add_scores(scorer.score(r, p)) # (reference, prediction)
out = agg.aggregate()
# return mid F1 like TorchMetrics
return {k: v.mid.fmeasure for k, v in out.items()}
def load_test_set():
# dataset = load_dataset('Pavithree/eli5', 'default', split='test')
dataset = load_from_disk('./data/test_with_labels')
ground_truth = dataset.map(
lambda x: {"answers": x["answers"]["text"][0] if x["answers"]["text"] else None},
load_from_cache_file=False,
keep_in_memory=True,
)['answers'] # dict['answers'] = list
return ground_truth
def evaluate_function(predictions: List[str], labels: List[str]) -> Dict[str, float]:
"""
    Computes the ROUGE-1 metric for a question answering task.
    Args:
        predictions (List[str]): A list of predicted answers.
        labels (List[str]): A list of ground truth answers.
Returns:
Dict[str, float]: A dictionary containing the ROUGE-1 F-measure score.
"""
print(f"Calling evaluate on {len(predictions)} predictions and {len(labels)} labels...")
results = compute_rouge(predictions, labels)
rouge1_fmeasure = float(results["rouge1"])
return {"Rouge1": rouge1_fmeasure}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for eli5 test split using ROUGE-1."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row with the prediction column.
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate_function(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import os
import numpy as np
import pandas as pd
from datasets import load_from_disk
hf_repo = 'Pavithree/eli5'
config = 'default'
test_split = 'test'
scoring_column = 'answers'
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
def load_test_set():
dataset = load_from_disk(os.path.join(global_shared_data_dir, 'Pavithree/eli5', 'default'))['test']
ground_truth = dataset.map(lambda x: {scoring_column: x[scoring_column]["text"][0] if x[scoring_column]["text"] else ''})[scoring_column]
return ground_truth
processed_test_set = load_test_set()
golden_submissions = pd.DataFrame({scoring_column: processed_test_set})
    golden_submissions.to_csv(os.path.join(output_directory, "gold_submission.csv"), index=False)
    # Two independently permuted copies of the gold answers, usable as shuffled baselines
    permuted_one = np.random.permutation(golden_submissions[scoring_column].values)
    permuted_golden_submissions = pd.DataFrame({scoring_column: permuted_one})
    permuted_golden_submissions.to_csv(os.path.join(output_directory, "gold_submission_permuted_1.csv"), index=False)
    permuted_two = np.random.permutation(golden_submissions[scoring_column].values)
    permuted_golden_submissions_ = pd.DataFrame({scoring_column: permuted_two})
    permuted_golden_submissions_.to_csv(os.path.join(output_directory, "gold_submission_permuted_2.csv"), index=False)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| null |
QuestionAnsweringFinqaAccuracy | Question Answering | Question Answering | dreamerdeo/finqa | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: QuestionAnsweringFinqaAccuracy
dataset: dreamerdeo/finqa
category: Question Answering
research_problem: Question Answering
output_type: Text Generation
config: default
train_split: train
test_split: test
input_columns:
- pre_text
- post_text
- table
- question
scoring_column: answer
shape:
- 1147
custom_gold_labels: false
custom_rad_class: false
metric: Accuracy
additional_metrics: null
sota:
- sota_paper_title: 'Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks'
sota_paper_url: https://arxiv.org/pdf/2305.05862v2
sota_score: 0.7803
sota_year: 2023
sota_venue: EMNLP
dataset_paper_url: https://arxiv.org/abs/2109.00120
estimated_worst_score: 0.0
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform Question Answering on dreamerdeo/finqa. Your predictions will be scored against the answer column of the test set.
## Data
### Dataset Structure
The default config of dreamerdeo/finqa has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
``` { 'post_text': List(Value('string')), 'pre_text': List(Value('string')), 'question': Value('string'), 'table': List(List(Value('string')))} ```
Train data can be loaded using load_from_disk('./data/train') and test data can be loaded using load_from_disk('./data/test').
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` answer ```
And it should be of shape (1147,).
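Producing a file in this format is straightforward with pandas. A minimal sketch, assuming one predicted answer string per test question (the two dummy answers below are placeholders, not real model output):

```python
import pandas as pd

# Hypothetical predictions -- one answer string per test question
# (1147 rows for the real test split; two dummy rows here for illustration).
predictions = ["14.1%", "5923"]

# The header must be exactly `answer`, matching the scoring column name.
pd.DataFrame({"answer": predictions}).to_csv("submission.csv", index=False)
```

Answers may be left in natural formats such as `14.1%` or `$5,923`, since the evaluation script normalizes currency symbols, commas, and percent signs before comparing.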
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["answer"])
def evaluate(predictions, labels):
"""
Compute QA accuracy for dreamerdeo/finqa.
Returns only {"Accuracy": accuracy}.
Matching rules:
- If both prediction and label parse as numbers (after stripping currency
symbols, commas, spaces; handling negatives in parentheses; converting
percents), compare numerically with tolerance.
- Otherwise, compare normalized strings (lowercased, trimmed, collapsed
whitespace).
"""
def is_nan(x):
return x is None or (isinstance(x, float) and np.isnan(x))
def normalize_text(s: str) -> str:
s = str(s).strip().lower()
# Collapse internal whitespace
s = " ".join(s.split())
return s
def to_number(s: str):
"""
Try to parse a string into a float.
- Removes currency symbols and commas.
- Handles negatives in parentheses: (123) -> -123
- Handles percent: '5%' -> 0.05
Returns float or None if parsing fails.
"""
if s is None:
return None
ss = str(s).strip()
if ss == "":
return None
neg = False
# Handle negatives in parentheses, e.g., "(1,234.56)"
if ss.startswith("(") and ss.endswith(")"):
neg = True
ss = ss[1:-1].strip()
# Remove currency symbols and spaces
ss = ss.replace("$", "").replace("£", "").replace("€", "")
ss = ss.replace(",", "").replace(" ", "")
is_percent = False
if ss.endswith("%"):
is_percent = True
ss = ss[:-1]
# Allow leading +/-
try:
val = float(ss)
except Exception:
return None
if neg:
val = -val
if is_percent:
val = val / 100.0
return val
# Coerce inputs to lists of strings; handle None/NaN
preds = ["" if is_nan(p) else str(p) for p in np.asarray(predictions, dtype=object)]
gts = ["" if is_nan(t) else str(t) for t in np.asarray(labels, dtype=object)]
if len(preds) != len(gts):
raise ValueError(
f"Number of predictions ({len(preds)}) does not match number of labels ({len(gts)})."
)
correct = 0
n = len(gts)
# Tolerances for numeric comparison
ABS_TOL = 1e-4
REL_TOL = 1e-4
for p, t in zip(preds, gts):
# Try numeric compare first
pn = to_number(p)
tn = to_number(t)
if pn is not None and tn is not None:
if abs(pn - tn) <= max(ABS_TOL, REL_TOL * max(1.0, abs(tn))):
correct += 1
continue
# Fall back to normalized string exact match
if normalize_text(p) == normalize_text(t):
correct += 1
acc = correct / n if n > 0 else 0.0
return {"Accuracy": float(acc)}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row with the prediction column.
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
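The numeric-matching rules described in the script's docstring can be exercised in isolation. The sketch below is a standalone copy of the script's `to_number` helper (not the evaluation harness itself), shown to illustrate how raw answer strings are normalized before comparison:

```python
def to_number(s):
    """Parse a string to float following the evaluation script's rules:
    strip currency symbols/commas/spaces, treat (x) as negative,
    convert a trailing % to a fraction. Returns None on failure."""
    if s is None:
        return None
    ss = str(s).strip()
    if ss == "":
        return None
    # Negatives written in accounting style: "(123)" -> -123
    neg = ss.startswith("(") and ss.endswith(")")
    if neg:
        ss = ss[1:-1].strip()
    # Drop currency symbols, thousands separators, and internal spaces
    ss = ss.replace("$", "").replace("£", "").replace("€", "")
    ss = ss.replace(",", "").replace(" ", "")
    is_percent = ss.endswith("%")
    if is_percent:
        ss = ss[:-1]
    try:
        val = float(ss)
    except Exception:
        return None
    if neg:
        val = -val
    return val / 100.0 if is_percent else val

print(to_number("$1,234.56"))   # 1234.56
print(to_number("(123)"))       # -123.0
print(to_number("5%"))          # 0.05
print(to_number("n/a"))         # None (falls back to string matching)
```

When either side fails to parse (returns `None`), the script falls back to normalized string comparison, so non-numeric answers are still scored.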
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'dreamerdeo/finqa/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['pre_text', 'post_text', 'table', 'question', 'answer'])
test = select_columns(test, ['pre_text', 'post_text', 'table', 'question'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
    Loads the test set from global_shared_data_dir into agent_data_mount_dir.
    Copies submission.csv from agent_log_dir into agent_data_mount_dir.
    :param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
    :param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'dreamerdeo/finqa/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["answer"])
def evaluate(predictions, labels):
"""
Compute QA accuracy for dreamerdeo/finqa.
Returns only {"Accuracy": accuracy}.
Matching rules:
- If both prediction and label parse as numbers (after stripping currency
symbols, commas, spaces; handling negatives in parentheses; converting
percents), compare numerically with tolerance.
- Otherwise, compare normalized strings (lowercased, trimmed, collapsed
whitespace).
"""
def is_nan(x):
return x is None or (isinstance(x, float) and np.isnan(x))
def normalize_text(s: str) -> str:
s = str(s).strip().lower()
# Collapse internal whitespace
s = " ".join(s.split())
return s
def to_number(s: str):
"""
Try to parse a string into a float.
- Removes currency symbols and commas.
- Handles negatives in parentheses: (123) -> -123
- Handles percent: '5%' -> 0.05
Returns float or None if parsing fails.
"""
if s is None:
return None
ss = str(s).strip()
if ss == "":
return None
neg = False
# Handle negatives in parentheses, e.g., "(1,234.56)"
if ss.startswith("(") and ss.endswith(")"):
neg = True
ss = ss[1:-1].strip()
# Remove currency symbols and spaces
ss = ss.replace("$", "").replace("£", "").replace("€", "")
ss = ss.replace(",", "").replace(" ", "")
is_percent = False
if ss.endswith("%"):
is_percent = True
ss = ss[:-1]
# Allow leading +/-
try:
val = float(ss)
except Exception:
return None
if neg:
val = -val
if is_percent:
val = val / 100.0
return val
# Coerce inputs to lists of strings; handle None/NaN
preds = ["" if is_nan(p) else str(p) for p in np.asarray(predictions, dtype=object)]
gts = ["" if is_nan(t) else str(t) for t in np.asarray(labels, dtype=object)]
if len(preds) != len(gts):
raise ValueError(
f"Number of predictions ({len(preds)}) does not match number of labels ({len(gts)})."
)
correct = 0
n = len(gts)
# Tolerances for numeric comparison
ABS_TOL = 1e-4
REL_TOL = 1e-4
for p, t in zip(preds, gts):
# Try numeric compare first
pn = to_number(p)
tn = to_number(t)
if pn is not None and tn is not None:
if abs(pn - tn) <= max(ABS_TOL, REL_TOL * max(1.0, abs(tn))):
correct += 1
continue
# Fall back to normalized string exact match
if normalize_text(p) == normalize_text(t):
correct += 1
acc = correct / n if n > 0 else 0.0
return {"Accuracy": float(acc)}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
R2AbsMolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: R2AbsMolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: R_2_Abs
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: false
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 0.033
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 6536.567
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules which is known as the **squared spatial extent (R_2_Abs)**. This is a fundamental geometric property that quantifies the spatial distribution of electron density around the molecular center of mass. R_2_Abs represents the expectation value of the squared distance of electrons from the center of mass, providing a measure of molecular size and compactness. Values are given in Bohr² (atomic units of area), with larger values indicating more spatially extended molecules. This property is important for understanding molecular volume, shape, and interactions. Your predictions will be scored against the `R_2_Abs` column of the test set.
## Data
### Dataset Structure
The default config of qm9 has the following structure. Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"id": string, # Unique identifier for the molecule
"composition": List[int64], # Vector encoding the count of each element in the molecule
}
```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804, 1.085800051689148, 0.00800000037997961],
[0.002199999988079071, -0.006000000052154064, 0.0020000000949949026],
[1.0117000341415405, 1.4637999534606934, 0.0003000000142492354],
[-0.5407999753952026, 1.4474999904632568, -0.8766000270843506],
[-0.5238000154495239, 1.4378999471664429, 0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
'R_2_Abs': 1173.51
}
```
> Note: The scoring column (`R_2_Abs`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file should contain the predicted values for the test set. The submission file should be a csv file named `submission.csv` with the following header:
```
R_2_Abs
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
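A minimal sketch of writing a correctly shaped submission file (assumption: `preds` stands in for your model's outputs; random placeholders are used here):

```python
import numpy as np
import pandas as pd

# Placeholder predictions; replace with real model outputs, keeping the
# test-set row order.
preds = np.random.uniform(500.0, 1500.0, size=10000)
pd.DataFrame({"R_2_Abs": preds}).to_csv("submission.csv", index=False)

# Sanity check: one header row plus 10000 rows, single column named R_2_Abs.
check = pd.read_csv("submission.csv")
assert check.shape == (10000, 1) and list(check.columns) == ["R_2_Abs"]
```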
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["R_2_Abs"])
import torch
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# Return random scalar values as a simple placeholder baseline
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
val = dataset['val']
test = dataset['test']
# Remove all scoring columns except R_2_Abs from train set (keep R_2_Abs for training)
train = train.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
val = val.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
val.save_to_disk(os.path.join(agent_data_mount_dir, 'val'))
# Remove all scoring columns including R_2_Abs from test set (R_2_Abs is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["R_2_Abs"])
import torch
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
# Return random scalar values as a simple placeholder baseline
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one prediction per row
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
ReadingComprehensionSquadExactMatch | Question Answering | Question Answering | rajpurkar/squad | ExactMatch | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
logging_info:
name: ReadingComprehensionSquadExactMatch
dataset: rajpurkar/squad
category: Question Answering
research_problem: Question Answering
output_type: Text generation
config: plain_text
train_split: train
test_split: validation
input_columns:
- title
- context
- question
scoring_column: answers
shape: (10570,)
custom_gold_labels: true
custom_rad_class: false
metric: ExactMatch
additional_metrics: null
sota:
- sota_paper_title: 'SplaXBERT: Leveraging Mixed Precision Training and Context Splitting for Question Answering'
sota_paper_url: https://arxiv.org/pdf/2412.05499
sota_score: 0.858
sota_year: 2024
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/1606.05250
estimated_worst_score: 0.0
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform Reading Comprehension on rajpurkar/squad.
You will be provided with a `title`, a `context` paragraph, and a `question` about the context, and will need to extract the answer from the context and return it as a string. Your predictions will be scored against the `answers` column of the test set. More specifically, your prediction is considered correct if it exactly matches one of the strings in the `text` field of `answers`.
## Data
### Dataset Structure
The default config of rajpurkar/squad has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
``` {
'title': Value('string'), # title of the article
'context': Value('string'), # context paragraph containing the answer
'question': Value('string'), # question about the context
'answers': { # possible answers to the question
'text': List(Value('string')), # list of possible answers (This is the scoring column to predict)
'answer_start': List(Value('int32')) # list of character positions where each answer starts in the context
}
}
```
Train data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/test')
```
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
```
answers
```
Provide a single answer for each question in the test set (i.e. one string per row). Do not follow the `answers` column structure of the original dataset; just provide one string answer per question.
And it should be of shape (10570,).
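A minimal sketch of producing such a file (assumption: `answers` stands in for one extracted string per test question; only three are shown here, while the real file needs 10570):

```python
import pandas as pd

# One free-text answer per test question, in test-set order. pandas quotes
# embedded commas and double quotes on write and unescapes them on read,
# so free-text answers survive the CSV round trip.
answers = ["Denver Broncos", 'the "gilded" age', "1,000 km"]
pd.DataFrame({"answers": answers}).to_csv("submission.csv", index=False)

print(pd.read_csv("submission.csv", header=0)["answers"].tolist())
```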
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the ExactMatch metric. Here is the evaluation script that will be used:
```python
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Calculate exact match
exact_matches = 0
for pred, label_list in zip(predictions, labels):
if format_prediction(pred) in label_list:
exact_matches += 1
continue
exact_match_score = exact_matches / len(labels)
return {"ExactMatch": exact_match_score}
```
Your csv submission will be loaded like this:
```python
import pandas as pd
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'rajpurkar/squad/plain_text')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['validation']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['title', 'context', 'question', 'answers'])
test = select_columns(test, ['title', 'context', 'question'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Copies the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'rajpurkar/squad/plain_text')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['validation']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
import os
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return [x["text"] for x in dataset["answers"]]
def format_prediction(pred):
# Handle the edge case where the string "None" arrives JSON-quoted
if pred == '"None"':
return "None"
return pred
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Count exact matches: a prediction is correct if it appears verbatim
# in the list of acceptable gold answers
exact_matches = 0
for pred, label_list in zip(predictions, labels):
if format_prediction(pred) in label_list:
exact_matches += 1
exact_match_score = exact_matches / len(labels)
return {"ExactMatch": exact_match_score}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one
# prediction per line; adjust if the submission format differs
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import random
from copy import deepcopy
hf_repo = "rajpurkar/squad"
config = "plain_text"
test_split = "validation"
scoring_column = "answers"
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [random.choice(d['text']) for d in data[f'{scoring_column}']]
rows = [json.dumps(r) if r == "None" else r for r in rows]  # JSON-quote a literal "None" answer so evaluate.py's format_prediction can distinguish it from a missing value
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
permutation_1 = deepcopy(rows)
permutation_1 = [permutation_1[-1]] + permutation_1[1:-1] + [permutation_1[0]]
pd.Series(permutation_1).to_csv(os.path.join(output_directory, 'gold_submission_permuted_1.csv'), index=False, header=[f'{scoring_column}'])
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| null |
SentimentAnalysisYelpReviewFullAccuracy | Text Classification | Text Classification | Yelp/yelp_review_full | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
logging_info:
name: SentimentAnalysisYelpReviewFullAccuracy
dataset: Yelp/yelp_review_full
metric: Accuracy
additional_metrics: null
shape: (50000,)
input_columns:
- text
scoring_column: label
config: yelp_review_full
train_split: train
test_split: test
category: Text Classification
research_problem: Text Classification
output_type: Text Classification
custom_gold_labels: false
custom_rad_class: false
sota:
- sota_paper_title: 'SplitEE: Early Exit in Deep Neural Networks with Split Computing'
sota_paper_url: https://arxiv.org/pdf/2309.09195v1
sota_score: 0.778
sota_year: 2023
sota_venue: AIMLSystems
dataset_paper_url: https://arxiv.org/abs/1509.01626
estimated_worst_score: 0.18208
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform Sentiment Analysis on Yelp/yelp_review_full. Your predictions will be scored against the `label` column of the test set.
## Data
### Dataset Structure
The default config of Yelp/yelp_review_full has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
Column types:
label: ClassLabel(names=['1 star', '2 star', '3 stars', '4 stars', '5 stars']): The given rating (0, 1, 2, 3, or 4).
text: Value('string'): The review text provided by the user.
```
Train data can be loaded using load_from_disk('./data/train') and test data can be loaded using load_from_disk('./data/test').
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` label ```
And it should be of shape `(50000,)`.
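To make the expected layout concrete, here is a minimal sketch that writes a `submission.csv` of the right shape using a constant-class baseline (the class index `2` is an arbitrary placeholder, not a recommendation; real predictions would come from a trained model):

```python
import csv

# Constant-class baseline purely to illustrate the required file layout.
N_TEST = 50_000
predictions = [2] * N_TEST  # class 2 corresponds to '3 stars'

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["label"])  # required header
    writer.writerows([p] for p in predictions)
```

Reading this file back with `pd.read_csv(..., header=0).values.squeeze()` yields an array of shape `(50000,)`, matching the row-count check in the evaluation script.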
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_dataset, load_from_disk
def load_test_set():
ds = load_from_disk("./data/test_with_labels")
ground_truth = [int(x) for x in ds["label"]]
return ground_truth
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Convert string inputs to int; non-string values pass through unchanged
labels = [int(x) if isinstance(x, str) else x for x in labels]
predictions = [int(x) if isinstance(x, str) else x for x in predictions]
accuracy = float(np.mean(np.array(predictions) == np.array(labels)))
return {"Accuracy": accuracy}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for the yelp_review_full test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one
# prediction per line; adjust if the submission format differs
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
``` | # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Yelp/yelp_review_full', 'yelp_review_full')
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"]
test = dataset["test"]
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test = test.remove_columns(['label'])
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Yelp/yelp_review_full', 'yelp_review_full')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
from datasets import load_dataset, load_from_disk
def load_test_set():
ds = load_from_disk("./data/test_with_labels")
ground_truth = [int(x) for x in ds["label"]]
return ground_truth
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
# Convert string inputs to int; non-string values pass through unchanged
labels = [int(x) if isinstance(x, str) else x for x in labels]
predictions = [int(x) if isinstance(x, str) else x for x in predictions]
accuracy = float(np.mean(np.array(predictions) == np.array(labels)))
return {"Accuracy": accuracy}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for the yelp_review_full test split using Accuracy."
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one
# prediction per line; adjust if the submission format differs
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
TextualClassificationSickAccuracy | Text Classification | Textual Inference | RobZamp/sick | Accuracy | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- scikit-learn
logging_info:
name: TextualClassificationSickAccuracy
dataset: RobZamp/sick
category: Text Classification
research_problem: Textual Inference
output_type: TextualClassification
config: default
train_split: train
test_split: test
input_columns:
- sentence_A
- sentence_B
scoring_column: label
shape: (4906,)
custom_gold_labels: false
custom_rad_class: false
metric: Accuracy
additional_metrics: null
sota:
- sota_paper_title: 'Curing the SICK and Other NLI Maladies'
sota_paper_url: https://direct.mit.edu/coli/article/49/1/199/113488/Curing-the-SICK-and-Other-NLI-Maladies
sota_score: 0.905
sota_year: 2023
sota_venue: Computational Linguistics
dataset_paper_url: https://arxiv.org/abs/1404.3648
estimated_worst_score: 0.1451284
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform textual entailment on RobZamp/sick. You will be provided with a pair of sentences, `sentence_A` and `sentence_B`, and must predict the entailment relation between them (0 = entailment, 1 = neutral, 2 = contradiction). Your predictions will be scored against the `label` column of the test set.
## Data
### Dataset Structure
The default config of RobZamp/sick has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
'label': ClassLabel(names=['entailment', 'neutral', 'contradiction']), # the entailment between the two sentences (0 = entailment, 1 = neutral, 2 = contradiction)
'sentence_A': Value('string'), # first sentence of the sentence pair
'sentence_B': Value('string') # second sentence of the sentence pair
}
```
Train data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/test')
```
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` label ```
Where each row contains your predicted class for the corresponding row in the test set.
And it should be of shape (4906,).
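As a sketch of the expected format, the snippet below maps the ClassLabel names to their integer ids and writes a constant "neutral" baseline of the required shape; a real submission would replace the constant predictions with model output:

```python
import csv

# Map ClassLabel names to the integer ids expected in the submission.
LABEL_IDS = {"entailment": 0, "neutral": 1, "contradiction": 2}

N_TEST = 4906
predictions = [LABEL_IDS["neutral"]] * N_TEST  # placeholder baseline

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["label"])  # required header
    writer.writerows([p] for p in predictions)
```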
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Accuracy metric. Here is the evaluation script that will be used:
```python
from sklearn.metrics import accuracy_score
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
acc = accuracy_score(labels, predictions)  # sklearn expects (y_true, y_pred)
return {"Accuracy": acc}
```
Your csv submission will be loaded like this:
```python
import pandas as pd
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'RobZamp/sick/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
validation = dataset['validation']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['sentence_A', 'sentence_B', 'label'])
test = select_columns(test, ['sentence_A', 'sentence_B'])
validation = select_columns(validation, ['sentence_A', 'sentence_B', 'label'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
validation.save_to_disk(os.path.join(agent_data_mount_dir, 'validation'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the test set from global_shared_data_dir into agent_data_mount_dir.
Copies submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'RobZamp/sick/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from sklearn.metrics import accuracy_score
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["label"])
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
acc = accuracy_score(labels, predictions)  # sklearn expects (y_true, y_pred)
return {"Accuracy": acc}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The CSV is expected to have a single header row followed by one
# prediction per line; adjust if the submission format differs
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
TextualSimilaritySickSpearmanCorrelation | Text Extraction and Matching | Textual Similarity | RobZamp/sick | SpearmanCorrelation | metric_lower_is_better: false
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- scipy
logging_info:
name: TextualSimilaritySickSpearmanCorrelation
dataset: RobZamp/sick
category: Text Extraction and Matching
research_problem: Textual Similarity
output_type: Textual Similarity
config: default
train_split: train
test_split: test
input_columns:
- sentence_A
- sentence_B
scoring_column: relatedness_score
shape: (4906,)
custom_gold_labels: false
custom_rad_class: false
metric: SpearmanCorrelation
additional_metrics: null
sota:
- sota_paper_title: 'CoSENT: Consistent Sentence Embedding via Similarity Ranking'
sota_paper_url: https://penghao-bdsc.github.io/papers/CoSENT_TASLP2024.pdf
sota_score: 0.854
sota_year: 2024
sota_venue: IEEE/ACM Transactions on Audio, Speech and Language Processing
dataset_paper_url: https://arxiv.org/abs/1404.3648
estimated_worst_score: -0.5870786
optimal_score: 1.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is an NLP task to perform textual relatedness on RobZamp/sick. You will be provided with a pair of sentences, `sentence_A` and `sentence_B`, and must predict a similarity score between 0 (low) and 5 (high) indicating the semantic relatedness of the two sentences. Your predictions will be scored against the `relatedness_score` column of the test set.
## Data
### Dataset Structure
The default config of RobZamp/sick has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
'relatedness_score': Value('float64'), # similarity score between 0 and 5 (This is the scoring column to predict)
'sentence_A': Value('string'), # first sentence of the sentence pair
'sentence_B': Value('string') # second sentence of the sentence pair
}
```
Train data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/validation')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/test')
```
Note that the scoring column has been removed from the test data.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` relatedness_score ```
Where each row contains your predicted similarity score (a float between 0 and 5) for the corresponding row in the test set.
And it should be of shape (4906,).
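Since Spearman correlation is rank-based, the ordering of your scores matters more than their absolute calibration, but values should still fall in the documented [0, 5] range. Here is a minimal sketch of producing the file (the raw scores are placeholders, not real model output):

```python
import csv

# Placeholder scores standing in for real model output.
raw_scores = [4.7, -0.3, 5.9, 2.5]

# Clamp into the documented [0, 5] range before writing.
clamped = [min(5.0, max(0.0, s)) for s in raw_scores]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["relatedness_score"])  # required header
    writer.writerows([f"{s:.4f}"] for s in clamped)
```

For the actual submission there would be 4906 rows, one per test pair, in test-set order.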
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the SpearmanCorrelation metric. Here is the evaluation script that will be used:
```python
from scipy.stats import spearmanr
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
return {"SpearmanCorrelation": spearmanr(predictions, labels).correlation}
```
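Because Spearman correlation depends only on ranks, any strictly monotone rescaling of your scores leaves the metric unchanged. For intuition, here is a pure-Python sketch of the tie-free formula, rho = 1 - 6*sum(d^2)/(n(n^2-1)); the official script uses `scipy.stats.spearmanr`, which also handles ties:

```python
def spearman_no_ties(xs, ys):
    # Rank each sequence (0-based ranks; assumes no tied values).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman_no_ties([0.1, 0.5, 0.9], [1.2, 3.4, 4.9])
```

Note that squaring all (positive) predictions, for example, does not change their ranks and therefore does not change the score.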
Your csv submission will be loaded like this:
```python
import pandas as pd
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'RobZamp/sick/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
validation = dataset['validation']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['sentence_A', 'sentence_B', 'relatedness_score'])
test = select_columns(test, ['sentence_A', 'sentence_B'])
validation = select_columns(validation, ['sentence_A', 'sentence_B', 'relatedness_score'])
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
validation.save_to_disk(os.path.join(agent_data_mount_dir, 'validation'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'RobZamp/sick/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset, load_from_disk
from scipy.stats import spearmanr
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["relatedness_score"])
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
return {"SpearmanCorrelation": spearmanr(predictions, labels).correlation}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV has a header row followed by one prediction value per row
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| null | null |
TimeSeriesForecastingKaggleWebTrafficMASE | Time Series | Time Series Forecasting | Monash-University/monash_tsf | MASE | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
- sktime
prepare_code_python_requirements:
- sktime
logging_info:
name: TimeSeriesForecastingKaggleWebTrafficMASE
dataset: Monash-University/monash_tsf
category: Time Series
research_problem: Time Series Forecasting
output_type: time series
config: kaggle_web_traffic
train_split: train
test_split: test
input_columns:
- target
scoring_column:
- label_target
shape: (145063,)
custom_gold_labels: true
custom_rad_class: false
metric: MASE
additional_metrics: null
sota:
- sota_paper_title: 'Monash Time Series Forecasting Archive'
sota_paper_url: https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/eddea82ad2755b24c4e168c5fc2ebd40-Paper-round2.pdf
sota_score: 0.622
sota_year: 2021
sota_venue: NEURIPS
dataset_paper_url: https://arxiv.org/abs/2105.06643
estimated_worst_score: 502962963078372
optimal_score: 0.0
|
# Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is a Time Series task to perform time series forecasting (59 forecasts ahead) on Monash-University/monash_tsf. You will be provided with time series data (`target`), and will need to forecast the future values (59 values). Your predictions will be scored against the `label_target` column of the test set.
## Data
### Dataset Structure
The kaggle_web_traffic config of Monash-University/monash_tsf has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
'label_target': List(Value('float32')), # true future values for the time series (This is the scoring column to predict). It includes both the historical and future values concatenated together.
'target': List(Value('float32')), # historical values for the time series
}
```
Train data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/train')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
load_from_disk('./data/test')
```
Note that the scoring column has been removed from the test data.
!!!IMPORTANT NOTE!!!
Some of the time series in this dataset contain NaN values. You should take this into account when building your model and making predictions.
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` label_target ```
Where each row contains, for the corresponding row in the test set, the historical values with your 59 predicted future values appended, all serialized as a single string (using `json.dumps`, for example).
And it should be of shape (145063,).
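A sketch of how one such row might be built, assuming `history` is a test row's `target` list and `forecast` holds 59 predicted floats (both names are placeholders). Note that Python's `json.dumps` serializes `float('nan')` as the literal `NaN`, which is exactly the form the evaluation script parses:

```python
import json

def make_row(history, forecast):
    # The scoring column is the history with the 59 forecast values appended,
    # dumped as a single string.
    assert len(forecast) == 59
    return json.dumps(list(history) + list(forecast))

row = make_row([1.0, float("nan"), 3.0], [0.0] * 59)
```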
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the average MASE (Mean Absolute Scaled Error) metric. Here is the evaluation script that will be used:
```python
import numpy as np
from datasets import load_from_disk
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error
def safe_literal_eval_with_nan(s):
import ast
s_fixed = s.replace('NaN', 'None')
lst = ast.literal_eval(s_fixed)
# map the 'None' placeholders back to float NaN so downstream numpy code sees numeric values
return [float('nan') if v is None else v for v in lst]
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
mases = []
test_ds = load_from_disk('./data/test_with_labels')
train_targets = test_ds["target"]
for pred, label, train_target in zip(predictions, labels, train_targets):
try:
pred = np.array(safe_literal_eval_with_nan(pred))
except Exception as e:
raise ValueError(f"Error parsing prediction: {pred}, with error {e}") from e
label = np.array(label)
if pred.shape != label.shape:
raise ValueError(
"Invalid sample: "
f"Prediction shape {pred.shape} does not match "
f"label shape {label.shape}"
)
train_size = np.array(train_target).shape[0]
# remove first train_size elements from pred and label
pred = pred[train_size:]
label = label[train_size:]
#find any nans in label
mask = ~np.isnan(label)
pred = pred[mask]
label = label[mask]
mases.append(mean_absolute_scaled_error(label, pred, y_train=train_target))
return {"MASE": np.mean(mases)}
```
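For intuition, MASE scales the out-of-sample mean absolute error by the in-sample error of a naive one-step forecast on the training series, so a value below 1 beats the naive forecaster. A pure-Python sketch (not the sktime implementation, which also handles seasonality and edge cases):

```python
def mase(y_true, y_pred, y_train):
    # Numerator: MAE of the forecast on the held-out horizon.
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    # Denominator: MAE of the naive forecast y[t] = y[t-1] on the training data.
    naive = sum(abs(a - b) for a, b in zip(y_train[1:], y_train[:-1])) / (len(y_train) - 1)
    return mae / naive

# Training series with an average absolute step of 1.0, forecast off by 0.5:
score = mase([5.0, 6.0], [5.5, 6.5], [1.0, 2.0, 3.0, 4.0])
```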
Your csv submission will be loaded like this:
```python
import pandas as pd
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
from utils import reformat_dataset
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Monash-University/monash_tsf/kaggle_web_traffic')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
test = dataset['test']
validation = dataset['validation']
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
train = select_columns(train, ['target'])
test = select_columns(test, ["target"])
train_set = reformat_dataset(train, validation)
test_set = reformat_dataset(validation, test)
test_set = test_set.remove_columns(['label_target'])
# Save to the agent data mount directory
train_set.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test_set.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
from utils import reformat_dataset
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads the labeled test set from global_shared_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agent's log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Monash-University/monash_tsf/kaggle_web_traffic')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
validation = dataset['validation']
test_set = reformat_dataset(validation, test)
test_set.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g. at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used, e.g., to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse, json
import numpy as np
import pandas as pd
import ast
from datasets import load_dataset, load_from_disk
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return dataset["label_target"]
def safe_literal_eval_with_nan(s):
import ast
s_fixed = s.replace('NaN', 'None')
lst = ast.literal_eval(s_fixed)
# map the 'None' placeholders back to float NaN so downstream numpy code sees numeric values
return [float('nan') if v is None else v for v in lst]
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
mases = []
test_ds = load_from_disk('./data/test_with_labels')
train_targets = test_ds["target"]
for pred, label, train_target in zip(predictions, labels, train_targets):
try:
pred = np.array(safe_literal_eval_with_nan(pred))
except Exception as e:
raise ValueError(f"Error parsing prediction: {pred}, with error {e}") from e
label = np.array(label)
if pred.shape != label.shape:
raise ValueError(
"Invalid sample: "
f"Prediction shape {pred.shape} does not match "
f"label shape {label.shape}"
)
train_target = np.array(train_target)
train_size = train_target.shape[0]
# some starting values in train_target can be NaN, remove them
train_target = train_target[~np.isnan(train_target)]
# remove first train_size elements from pred and label
pred = pred[train_size:]
label = label[train_size:]
#find any nans in label
mask = ~np.isnan(label)
pred = pred[mask]
label = label[mask]
# there is one sample that seems to have all nans in the label after filtering
if label.shape[0] == 0:
continue
mase = mean_absolute_scaled_error(y_true=label, y_pred=pred, y_train=train_target)
mases.append(mase)
return {"MASE": np.mean(mases)}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV has a header row followed by one prediction value per row
# Adjust if your submission format differs (e.g., extra columns)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import random
from copy import deepcopy
import numpy as np
from utils import reformat_dataset
hf_repo = "Monash-University/monash_tsf"
config = "kaggle_web_traffic"
scoring_column = "label_target"
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
test = ds['test']
validation = ds['validation']
test_set = reformat_dataset(validation, test)
rows = [json.dumps(d['label_target']) for d in test_set]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
# permute the gold labels randomly to create a different version
permutation_1 = deepcopy(rows)
i = 0
j = 109
permutation_1 = permutation_1[:i] + [permutation_1[j]] + permutation_1[i+1:j] + [permutation_1[i]] + permutation_1[j+1:]
permutation_2 = deepcopy(rows)
i = 109
j = 729
permutation_2 = permutation_2[:i] + [permutation_2[j]] + permutation_2[i+1:j] + [permutation_2[i]] + permutation_2[j+1:]
pd.Series(permutation_1).to_csv(os.path.join(output_directory, 'gold_submission_permuted_1.csv'), index=False, header=[f'{scoring_column}'])
pd.Series(permutation_2).to_csv(os.path.join(output_directory, 'gold_submission_permuted_2.csv'), index=False, header=[f'{scoring_column}'])
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from copy import deepcopy
def reformat_dataset(input_dataset, output_dataset):
dataset = deepcopy(input_dataset)
if "target" in output_dataset.column_names:
dataset = dataset.add_column("label_target", output_dataset["target"])
else:
raise ValueError(f"Output dataset must contain 'target' column but has columns: {output_dataset.column_names}")
return dataset
|
TimeSeriesForecastingRideshareMAE | Time Series | Time Series Forecasting | Monash-University/monash_tsf | MAE | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
- scikit-learn
logging_info:
name: TimeSeriesForecastingRideshareMAE
category: Time Series
research_problem: Time Series Forecasting
output_type: Numeric
dataset: Monash-University/monash_tsf
config: rideshare
metric: MAE
additional_metrics: MAPE, sMAPE, MAE, RMSE
train_split: train
test_split: test
input_columns:
- target
- feat_dynamic_real
scoring_column:
- label_target
shape: (2304,)
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'Bridging Distribution Gaps in Time Series Foundation Model Pretraining with Prototype-Guided Normalization'
sota_paper_url: https://arxiv.org/pdf/2504.10900
sota_score: 1.185
sota_year: 2025
sota_venue: Preprint
dataset_paper_url: https://arxiv.org/abs/2105.06643
estimated_worst_score: 30.22490
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is a Time Series task to perform time series forecasting on Monash-University/monash_tsf. You will be provided with time series data (`target`), and will need to forecast the future values. Your predictions will be scored against the `label_target` column of the test set.
## Data
### Dataset Description
The rideshare dataset contains various hourly time series representations of attributes related to Uber and Lyft rideshare services for various locations in New York between 26/11/2018 and 18/12/2018. The dataset contains **2304 individual time series**, each capturing different aspects of rideshare demand and pricing, including pickup requests, pricing variations, and service availability across different geographic zones and time periods. The dataset is organized into 156 samples, each containing up to 15 time series, resulting in a total of 2304 individual time series to forecast.
The dataset includes **4 dynamic features** (covariates) that provide additional context for forecasting, such as temporal patterns, weather conditions, traffic indicators, or demand signals that vary over time alongside the main rideshare metrics.
The forecasting task is to predict **48 timesteps into the future** for each of the 2304 time series using historical hourly data. This corresponds to predicting the next 48 hours of rideshare service metrics for each individual time series based on historical patterns and covariate information.
**Note on Missing Values**: This dataset contains NaN (Not a Number) values in the time series, representing missing or unavailable data points. During evaluation, only valid (non-NaN) data points are considered for calculating the Mean Absolute Error. Predictions for time steps with NaN ground truth values are ignored in the evaluation.
### Dataset Structure
The rideshare config of `Monash-University/monash_tsf` has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
'label_target': List(Value('float32')), # true future values for the time series (This is the scoring column to predict). It includes both the historical and future values concatenated together.
'target': List(Value('float32')), # historical values for the time series
'feat_dynamic_real': List(Value('float32')), # additional dynamic real-valued features that can be used alongside the target time series
}
```
Train data can be loaded using load_from_disk('./data/train') and test data can be loaded using load_from_disk('./data/test').
Note that the scoring column has been removed from the test data.
**Data Shape and Format**: The rideshare dataset treats each time series individually, resulting in **2304 individual time series**, each requiring 48 future timestep predictions. The submission therefore has 2304 rows, where each row contains the 48 forecast values for one time series.
**NaN Handling in Evaluation**: During evaluation, NaN values in both predictions and ground truth labels are handled gracefully. Only valid (non-NaN) data points are considered when calculating the Mean Absolute Error. If a ground truth value is NaN, the corresponding prediction is ignored in the evaluation, ensuring robust performance measurement even with missing data.
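The NaN masking described above can be sketched in plain Python (a hypothetical helper for illustration; the official script filters invalid positions and then delegates to `sklearn.metrics.mean_absolute_error`):

```python
import math

def masked_mae(preds, labels):
    # Keep only positions where the ground truth is a valid (non-NaN) number.
    pairs = [(p, l) for p, l in zip(preds, labels) if not math.isnan(l)]
    if not pairs:
        raise ValueError("no valid (non-NaN) ground-truth values to score")
    return sum(abs(p - l) for p, l in pairs) / len(pairs)

# The middle position is ignored because its label is NaN:
err = masked_mae([1.0, 2.0, 3.0], [1.5, float("nan"), 2.0])
```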
### Submission file
The submission file should be a csv file named `submission.csv` with the following header:
``` label_target ```
And it should be of shape `(2304,)` where each row contains a JSON-encoded list of 48 forecast values.
### Evaluation Code
The evaluation metric is Mean Absolute Error (MAE), calculated as follows:
```python
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
import ast
from datasets import load_dataset, load_from_disk
from sklearn.metrics import mean_absolute_error
from utils import parse_and_validate_predictions_and_labels, process_predictions_and_labels_for_evaluation
def load_test_set():
"""Load test labels and extract forecast portions to match custom_labels.py output"""
import json
from utils import reformat_dataset
dataset = load_from_disk("./data/test_with_labels")
# The dataset contains the full sequences, but we need just the forecast portions
# like custom_labels.py generates - 48 values per individual time series
validation_targets = dataset["target"] # Base sequences (validation length)
full_test_targets = dataset["label_target"] # Full test sequences
forecast_labels = []
for val_target, full_target in zip(validation_targets, full_test_targets):
for i in range(len(full_target)): # Process each series individually
val_series_len = len(val_target[i])
full_series = full_target[i]
# Extract available forecast portion - keep NaNs in original positions
available_forecast = full_series[val_series_len:]
# Take exactly 48 values, padding at the END with NaN if needed
forecast_48 = available_forecast[:48] # Take up to 48 values
while len(forecast_48) < 48: # Pad at the END if shorter
forecast_48.append(np.nan)
# Convert to JSON string like custom_labels.py does
# Handle NaN values
forecast_clean = []
for val in forecast_48:
if isinstance(val, float) and np.isnan(val):
forecast_clean.append("NaN")
else:
forecast_clean.append(val)
# Each individual time series becomes one row with 48 forecasts
forecast_labels.append(json.dumps(forecast_clean))
return forecast_labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
all_preds_flat, all_labels_flat = parse_and_validate_predictions_and_labels(predictions, labels)
valid_preds, valid_labels = process_predictions_and_labels_for_evaluation(all_preds_flat, all_labels_flat)
mae = mean_absolute_error(valid_labels, valid_preds)
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(description="Evaluate predictions")
p.add_argument("--submission-file", default="submission.csv", help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV has one header row followed by one
# JSON-encoded prediction list per line
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
from utils import reformat_dataset, combine_lists
# Configure logger with custom prefix
logger = logging.getLogger("dataset_code")
handler = logging.StreamHandler()
formatter = logging.Formatter("[Running provided `dataset_code`] %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, "Monash-University/monash_tsf/rideshare")
dataset = load_from_disk(dataset_source_fpath)
train = dataset["train"] # Base length time steps per series (339 series, minutely measurements)
validation = dataset["validation"] # Base+60 time steps per series (339 series, 1 hour ahead)
test = dataset["test"] # Base+120 time steps per series (339 series, 2 hours ahead)
train = train.select_columns(["target", "feat_dynamic_real"])
test = test.select_columns(["target", "feat_dynamic_real"])
validation = validation.select_columns(["target", "feat_dynamic_real"])
train = train.map(combine_lists)
test = test.map(combine_lists)
validation = validation.map(combine_lists)
# TRAINING DATA PREPARATION:
# Input: train split (base steps) → Target: validation split (base+60 steps)
# Model learns to forecast 60 steps ahead (consistent 60-step horizon, 1 hour)
train_set = reformat_dataset(train, validation)
# TEST DATA PREPARATION:
# Input: validation split (base+60 steps) → Target: test split (base+120 steps)
# Model must forecast 60 steps ahead for evaluation (consistent 60-step horizon, 1 hour)
test_set = reformat_dataset(validation, test)
# Remove labels from test set (agent shouldn't see ground truth)
test_set = test_set.remove_columns(["label_target"])
# Save to the agent data mount directory
train_set.save_to_disk(os.path.join(agent_data_mount_dir, "train"))
test_set.save_to_disk(os.path.join(agent_data_mount_dir, "test"))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data",
)
parser.add_argument(
"--agent-data-mount-dir", required=True, help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent.",
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import shutil
import sys
import argparse
from datasets import load_from_disk
from utils import reformat_dataset, combine_lists
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, "Monash-University/monash_tsf/rideshare")
dataset = load_from_disk(dataset_source_fpath)
test = dataset["test"]
validation = dataset["validation"]
test = test.map(combine_lists)
validation = validation.map(combine_lists)
test_set = reformat_dataset(validation, test)
test_set.save_to_disk(os.path.join(agent_data_mount_dir, "test_with_labels"))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, "submission.csv")
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, "submission.csv"))
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Load test set with labels")
parser.add_argument("--global-shared-data-dir")
parser.add_argument("--agent-data-mount-dir")
parser.add_argument("--agent-log-dir")
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
main(args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
import ast
from datasets import load_dataset, load_from_disk
from sklearn.metrics import mean_absolute_error
from utils import parse_and_validate_predictions_and_labels, process_predictions_and_labels_for_evaluation
def load_test_set():
"""Load test labels and extract forecast portions to match custom_labels.py output"""
import json
from utils import reformat_dataset
dataset = load_from_disk("./data/test_with_labels")
# The dataset contains the full sequences, but we need just the forecast portions
# like custom_labels.py generates - 48 values per individual time series
validation_targets = dataset["target"] # Base sequences (validation length)
full_test_targets = dataset["label_target"] # Full test sequences
forecast_labels = []
for val_target, full_target in zip(validation_targets, full_test_targets):
for i in range(len(full_target)): # Process each series individually
val_series_len = len(val_target[i])
full_series = full_target[i]
# Extract available forecast portion - keep NaNs in original positions
available_forecast = full_series[val_series_len:]
# Take exactly 48 values, padding at the END with NaN if needed
forecast_48 = available_forecast[:48] # Take up to 48 values
while len(forecast_48) < 48: # Pad at the END if shorter
forecast_48.append(np.nan)
# Convert to JSON string like custom_labels.py does
# Handle NaN values
forecast_clean = []
for val in forecast_48:
if isinstance(val, float) and np.isnan(val):
forecast_clean.append("NaN")
else:
forecast_clean.append(val)
# Each individual time series becomes one row with 48 forecasts
forecast_labels.append(json.dumps(forecast_clean))
return forecast_labels
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
all_preds_flat, all_labels_flat = parse_and_validate_predictions_and_labels(predictions, labels)
valid_preds, valid_labels = process_predictions_and_labels_for_evaluation(all_preds_flat, all_labels_flat)
mae = mean_absolute_error(valid_labels, valid_preds)
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(description="Evaluate predictions")
p.add_argument("--submission-file", default="submission.csv", help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV has one header row followed by one
# JSON-encoded prediction list per line
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == "__main__":
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import random
from copy import deepcopy
import numpy as np
from utils import reformat_dataset, combine_lists
hf_repo = "Monash-University/monash_tsf"
config = "rideshare"
scoring_column = "label_target"
def main(global_shared_data_dir, output_directory):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
ds = load_from_disk(os.path.join(global_shared_data_dir, f"{hf_repo}/{config}"))
test = ds["test"]
validation = ds["validation"]
test = test.map(combine_lists)
validation = validation.map(combine_lists)
test_set = reformat_dataset(validation, test)
# Extract forecast portion from each individual time series
rows = []
for d in test_set:
full_sequences = d["label_target"]
input_sequences = d["target"]
for i in range(len(full_sequences)):
full_seq = full_sequences[i]
input_seq = input_sequences[i]
# Extract forecast portion - keep NaNs in original positions
available_forecast = full_seq[len(input_seq) :]
# Take exactly 48 values, padding at the END with NaN if needed
forecast_48 = available_forecast[:48] # Take up to 48 values
while len(forecast_48) < 48: # Pad at the END if shorter
forecast_48.append(np.nan)
# Handle NaN values for JSON serialization - convert to 'NaN' strings
forecast_clean = []
for val in forecast_48:
if isinstance(val, float) and np.isnan(val):
forecast_clean.append("NaN")
else:
forecast_clean.append(val)
# Each individual time series becomes one row with 48 forecasts
rows.append(json.dumps(forecast_clean))
pd.Series(rows).to_csv(
os.path.join(output_directory, "gold_submission.csv"), index=False, header=[f"{scoring_column}"]
)
# permute the gold labels randomly to create different versions
# BUT only swap rows that have the same length to preserve data structure
dataset_size = len(rows)
# Group rows by their length to ensure we only swap compatible rows
length_groups = {}
for i, row in enumerate(rows):
data = json.loads(row)
length = len(data)
if length not in length_groups:
length_groups[length] = []
length_groups[length].append(i)
# Permutation 1: Little shuffling - swap only first two elements if they have same length
permutation_1 = deepcopy(rows)
if dataset_size >= 2:
data_0 = json.loads(rows[0])
data_1 = json.loads(rows[1])
if len(data_0) == len(data_1):
# Swap first two elements only if they have same length
permutation_1[0], permutation_1[1] = permutation_1[1], permutation_1[0]
# Permutation 2: A lot of shuffling - but only within same-length groups
permutation_2 = deepcopy(rows)
if dataset_size > 3:
# Set random seed for reproducible shuffling
random.seed(42)
# Shuffle within each length group to preserve structure
for length, indices in length_groups.items():
if len(indices) > 1:
# Shuffle indices within this length group
shuffled_indices = indices.copy()
random.shuffle(shuffled_indices)
# Apply swaps within the group (70% of group size)
swap_count = max(1, int(len(indices) * 0.7))
for k in range(swap_count):
if k < len(indices) and k < len(shuffled_indices):
i = indices[k]
j = shuffled_indices[k]
if i != j and i < len(permutation_2) and j < len(permutation_2):
permutation_2[i], permutation_2[j] = permutation_2[j], permutation_2[i]
pd.Series(permutation_1).to_csv(
os.path.join(output_directory, "gold_submission_permuted_1.csv"), index=False, header=[f"{scoring_column}"]
)
pd.Series(permutation_2).to_csv(
os.path.join(output_directory, "gold_submission_permuted_2.csv"), index=False, header=[f"{scoring_column}"]
)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument(
"--global-shared-data-dir",
type=str,
required=True,
help="Path to the global shared data directory where you will find the dataset",
)
parser.add_argument("--output-directory", type=str, required=True, help="Directory to save the output CSV")
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import Dataset
import numpy as np
def combine_lists(example):
"""Combine target (10 lists) and feat_dynamic_real (5 lists) into 15 lists."""
combined = example["target"] + example["feat_dynamic_real"]
return {"combined": combined}
def reformat_dataset(validation_ds, test_ds):
"""Transform time series data for forecasting evaluation.
Args:
validation_ds: Dataset with base sequence length (for input)
test_ds: Dataset with extended sequence length (containing forecasts)
Returns:
Dataset with 'target' (input) and 'label_target' (extended with forecasts) columns
"""
test_set = Dataset.from_dict({"target": validation_ds["combined"], "label_target": test_ds["combined"]})
return test_set
def process_predictions_and_labels_for_evaluation(predictions, labels):
"""
Process prediction and label arrays, handling NaN values for evaluation.
Args:
predictions: Array of flattened predictions
labels: Array of flattened labels
Returns:
tuple: (valid_predictions, valid_labels) with NaN values removed
"""
# Convert string 'NaN' to actual NaN for proper masking
preds_clean = []
labels_clean = []
for pred_val, label_val in zip(predictions, labels):
# Convert string 'NaN' to numpy NaN, handle other cases
try:
if pred_val == "NaN" or pred_val is None:
pred_clean = np.nan
else:
pred_clean = float(pred_val)
except (ValueError, TypeError):
pred_clean = np.nan
try:
if label_val == "NaN" or label_val is None:
label_clean = np.nan
else:
label_clean = float(label_val)
except (ValueError, TypeError):
label_clean = np.nan
preds_clean.append(pred_clean)
labels_clean.append(label_clean)
preds_clean = np.array(preds_clean)
labels_clean = np.array(labels_clean)
# Remove NaN values - only evaluate on valid data points
valid_mask = ~(np.isnan(preds_clean) | np.isnan(labels_clean))
if not np.any(valid_mask):
raise ValueError("No valid (non-NaN) data points found for evaluation")
valid_preds = preds_clean[valid_mask]
valid_labels = labels_clean[valid_mask]
return valid_preds, valid_labels
def parse_and_validate_predictions_and_labels(predictions, labels):
"""
Parse JSON predictions and labels, handle NaN values, validate shapes.
Args:
predictions: List of JSON-encoded prediction strings
labels: List of JSON-encoded label strings or arrays
Returns:
tuple: (flattened_predictions, flattened_labels) ready for evaluation
"""
import ast
import json
all_preds = []
all_labels = []
for pred, label in zip(predictions, labels):
# Handle NaN values in prediction strings by replacing them with np.nan
pred_str = pred.replace("NaN", "null").replace("nan", "null")
try:
pred_list = json.loads(pred_str)
# Convert null values back to np.nan
pred_list = [np.nan if x is None else x for x in pred_list]
pred = np.array(pred_list)
except json.JSONDecodeError:
# Fallback to ast.literal_eval if JSON parsing fails
pred = np.array(ast.literal_eval(pred))
# Handle NaN values in labels the same way as predictions
if isinstance(label, str):
label_str = label.replace("NaN", "null").replace("nan", "null")
try:
label_list = json.loads(label_str)
# Convert null values back to np.nan
label_list = [np.nan if x is None else x for x in label_list]
label = np.array(label_list)
except json.JSONDecodeError:
# Fallback to ast.literal_eval if JSON parsing fails
label = np.array(ast.literal_eval(label))
else:
label = np.array(label)
# Both predictions and labels are already forecast portions from custom_labels.py
# Just validate they have the same shape
if pred.shape != label.shape:
raise ValueError(
f"Invalid sample: Prediction shape {pred.shape} does not match label shape {label.shape}."
)
all_preds.append(pred)
all_labels.append(label)
# Flatten and process predictions and labels, handling NaN values
all_preds_flat = np.concatenate(all_preds).flatten()
all_labels_flat = np.concatenate(all_labels).flatten()
return all_preds_flat, all_labels_flat
|
TimeSeriesForecastingSolarWeeklyMAE | Time Series | Time Series Forecasting | Monash-University/monash_tsf | MAE | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
- scikit-learn
logging_info:
name: TimeSeriesForecastingSolarWeeklyMAE
category: Time Series
research_problem: Time Series Forecasting
output_type: Numeric
dataset: Monash-University/monash_tsf
config: solar_weekly
metric: MAE
additional_metrics: MAPE, sMAPE, MAE, RMSE
train_split: train
test_split: test
input_columns:
- target
scoring_column:
- label_target
shape: (137,)
custom_gold_labels: true
custom_rad_class: false
sota:
- sota_paper_title: 'In-Context Fine-Tuning for Time-Series Foundation Models'
sota_paper_url: https://arxiv.org/pdf/2410.24087
sota_score: 576.35
sota_year: 2025
sota_venue: ICML
dataset_paper_url: https://arxiv.org/abs/2105.06643
estimated_worst_score: 34761.99
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: This is a Time Series task to perform time series forecasting (5 weeks ahead) on Monash-University/monash_tsf. You will be provided with time series data (`target`), and will need to forecast the future values (5 weekly values). Your predictions will be scored against the `label_target` column of the test set.
## Data
### Dataset Description
The solar_weekly dataset provides weekly aggregated solar power generation and forecast data for a large set of simulated photovoltaic (PV) plants across the United States. The dataset captures the complex dynamics of solar power generation, including seasonal variations, weather-dependent fluctuations, and geographic diversity across different climate zones. The forecasting task is to predict the next 5 weeks of solar power generation for each plant using historical weekly generation data.
### Dataset Structure
The solar_weekly config of `Monash-University/monash_tsf` has the following structure.
Here is a description of the contents of each column including their name, what they contain, and the data type:
```
{
'label_target': List(Value('float32')), # true future values for the time series (This is the scoring column to predict). It includes both the historical and future values concatenated together.
'target': List(Value('float32')), # historical values for the time series (a string representation of a list of floats)
}
```
Train data can be loaded using load_from_disk('./data/train') and test data can be loaded using load_from_disk('./data/test').
Note that the scoring column has been removed from the test data.
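Since `label_target` concatenates the history and the future values, the 5-week window a model is scored on can be recovered by slicing past the history length. A toy sketch with made-up numbers:

```python
# Hypothetical series: 8 historical weeks plus the appended 5-week forecast
target = [10.0, 11.0, 9.5, 12.0, 10.5, 11.5, 12.5, 13.0]
label_target = target + [13.5, 14.0, 12.0, 11.0, 12.5]

# The scored window is whatever extends past the end of the history
forecast = label_target[len(target):]
print(forecast)  # [13.5, 14.0, 12.0, 11.0, 12.5]
```

This mirrors the `label[train_size:]` slice in the evaluation script below.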
### Submission file
The submission file should be a CSV file named `submission.csv` with the following header:
``` label_target ```
And it should be of shape `(137,)`.
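A minimal stdlib sketch of writing a conforming file, using hypothetical constant forecasts (137 rows, one JSON-encoded 5-value list per row):

```python
import csv
import json

# Hypothetical naive forecasts: 137 rows, 5 weekly values each
preds = [[100.0, 101.0, 99.0, 102.0, 100.5] for _ in range(137)]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["label_target"])
    for p in preds:
        writer.writerow([json.dumps(p)])
```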
### Evaluation
The evaluation will be performed on the `submission.csv` file you have submitted using the Mean Absolute Error (MAE) metric. Here is the evaluation script that will be used:
```py
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
import ast
from datasets import load_dataset, load_from_disk
from sklearn.metrics import mean_absolute_error
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return dataset["label_target"]
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
all_preds = []
all_labels = []
test_ds = load_from_disk('./data/test_with_labels')
train_targets = test_ds["target"]
for pred, label, train_target in zip(predictions, labels, train_targets):
# Handle NaN values in prediction strings by replacing them with np.nan
pred_str = pred.replace('NaN', 'null').replace('nan', 'null')
try:
pred_list = json.loads(pred_str)
# Convert null values back to np.nan
pred_list = [np.nan if x is None else x for x in pred_list]
pred = np.array(pred_list)
except json.JSONDecodeError:
# Fallback to ast.literal_eval if JSON parsing fails
pred = np.array(ast.literal_eval(pred))
label = np.array(label)
train_size = np.array(train_target).shape[0]
# Extract forecast portion from full label sequence
label_forecast = label[train_size:]
# Predictions should already be 5-step forecasts from custom_labels.py
if pred.shape != label_forecast.shape:
raise ValueError(
f"Invalid sample: Prediction shape {pred.shape} does not match "
f"forecast label shape {label_forecast.shape}. Expected {5} forecast steps."
)
all_preds.append(pred)
all_labels.append(label_forecast)
all_preds = np.concatenate(all_preds)
all_labels = np.concatenate(all_labels)
# Flatten arrays
all_preds_flat = all_preds.flatten()
all_labels_flat = all_labels.flatten()
# Remove NaN values - only evaluate on valid data points
valid_mask = ~(np.isnan(all_preds_flat) | np.isnan(all_labels_flat))
if not np.any(valid_mask):
raise ValueError("No valid (non-NaN) data points found for evaluation")
valid_preds = all_preds_flat[valid_mask]
valid_labels = all_labels_flat[valid_mask]
mae = mean_absolute_error(valid_labels, valid_preds)
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
# The submission CSV has one header row followed by one
# JSON-encoded prediction list per line
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
from utils import reformat_dataset
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Monash-University/monash_tsf/solar_weekly')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train'] # Base-length time steps per series (137 weekly series)
validation = dataset['validation'] # Base+5 time steps per series (5 weeks ahead)
test = dataset['test'] # Base+10 time steps per series (10 weeks ahead)
def select_columns(dds, columns):
to_drop = [c for c in dds.column_names if c not in columns]
dds_small = dds.remove_columns(to_drop)
return dds_small
# Keep only target sequences (feat_dynamic_real is null for this dataset)
train = select_columns(train, ['target'])
test = select_columns(test, ["target"])
# TRAINING DATA PREPARATION:
# Input: train split (base steps) → Target: validation split (base+5 steps)
# Model learns to forecast 5 steps ahead (consistent 5-step horizon, 5 weeks)
train_set = reformat_dataset(train, validation)
# TEST DATA PREPARATION:
# Input: validation split (base+5 steps) → Target: test split (base+10 steps)
# Model must forecast 5 steps ahead for evaluation (consistent 5-step horizon, 5 weeks)
test_set = reformat_dataset(validation, test)
# Remove labels from test set (agent shouldn't see ground truth)
test_set = test_set.remove_columns(['label_target'])
# Save to the agent data mount directory
train_set.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
test_set.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
parser.add_argument(
"--global-shared-data-dir",
required=True,
help="Path to the sharded raw data directory, e.g at /checkpoint/maui/shared/airsbench-raw-data"
)
parser.add_argument(
"--agent-data-mount-dir",
required=True,
help="Path to the agent data mount directory, e.g ~/aira-dojo/data"
)
parser.add_argument(
"--agent-log-dir",
required=False,
help="Path to a specific agents log directory, e.g /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g to pull in data created by a previous agent."
)
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
from utils import reformat_dataset
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'Monash-University/monash_tsf/solar_weekly')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
validation = dataset['validation']
test_set = reformat_dataset(validation, test)
test_set.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
import ast
from datasets import load_from_disk
from sklearn.metrics import mean_absolute_error
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return dataset["label_target"]
def evaluate(predictions, labels):
"""
Returns a dict of metric_name -> value
"""
all_preds = []
all_labels = []
test_ds = load_from_disk('./data/test_with_labels')
train_targets = test_ds["target"]
for pred, label, train_target in zip(predictions, labels, train_targets):
        # Handle NaN tokens in the prediction string by mapping them to JSON null
        pred_str = pred.replace('NaN', 'null').replace('nan', 'null')
try:
pred_list = json.loads(pred_str)
# Convert null values back to np.nan
pred_list = [np.nan if x is None else x for x in pred_list]
pred = np.array(pred_list)
except json.JSONDecodeError:
# Fallback to ast.literal_eval if JSON parsing fails
pred = np.array(ast.literal_eval(pred))
label = np.array(label)
train_size = np.array(train_target).shape[0]
# Extract forecast portion from full label sequence
label_forecast = label[train_size:]
# Predictions should already be 5-step forecasts from custom_labels.py
if pred.shape != label_forecast.shape:
raise ValueError(
f"Invalid sample: Prediction shape {pred.shape} does not match "
f"forecast label shape {label_forecast.shape}. Expected {5} forecast steps."
)
all_preds.append(pred)
all_labels.append(label_forecast)
all_preds = np.concatenate(all_preds)
all_labels = np.concatenate(all_labels)
# Flatten arrays
all_preds_flat = all_preds.flatten()
all_labels_flat = all_labels.flatten()
# Remove NaN values - only evaluate on valid data points
valid_mask = ~(np.isnan(all_preds_flat) | np.isnan(all_labels_flat))
if not np.any(valid_mask):
raise ValueError("No valid (non-NaN) data points found for evaluation")
valid_preds = all_preds_flat[valid_mask]
valid_labels = all_labels_flat[valid_mask]
mae = mean_absolute_error(valid_labels, valid_preds)
return {"MAE": mae}
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions"
)
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
print(f"Loading predictions from: {a.submission_file}")
try:
        # The CSV is expected to have a header row followed by one prediction per line
        # Adjust if your submission format differs (e.g., extra columns or no header)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from datasets import load_from_disk
import json
import pandas as pd
import os
import argparse
import random
from copy import deepcopy
import numpy as np
from utils import reformat_dataset
hf_repo = "Monash-University/monash_tsf"
config = "solar_weekly"
scoring_column = "label_target"
def main(
global_shared_data_dir,
output_directory
):
"""
Loads data from global_shared_data_dir and saves a gold_submission.csv to output_directory, e.g:
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
data = ds[f'{test_split}']
rows = [json.dumps(d[f'{scoring_column}']) for d in data]
pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[f'{scoring_column}'])
"""
ds = load_from_disk(os.path.join(global_shared_data_dir, f'{hf_repo}/{config}'))
test = ds['test']
validation = ds['validation']
test_set = reformat_dataset(validation, test)
# Extract only the forecast portion from each sequence (5 weekly values)
# This matches what agents are expected to predict for solar_weekly
rows = []
for d in test_set:
full_sequence = d['label_target'] # Extended sequence length
input_sequence = d['target'] # Base sequence length
forecast_portion = full_sequence[len(input_sequence):] # Forecast steps ahead
rows.append(json.dumps(forecast_portion))
    pd.Series(rows).to_csv(os.path.join(output_directory, 'gold_submission.csv'), index=False, header=[scoring_column])
# permute the gold labels randomly to create different versions
# Get dataset size for permutation logic
dataset_size = len(rows)
# Permutation 1: Little shuffling - swap only adjacent elements
permutation_1 = deepcopy(rows)
if dataset_size >= 2:
# Swap first two elements only (minimal change)
permutation_1[0], permutation_1[1] = permutation_1[1], permutation_1[0]
# Permutation 2: A lot of shuffling - extensive randomization
permutation_2 = deepcopy(rows)
if dataset_size > 3:
# Set random seed for reproducible shuffling
random.seed(42)
# Shuffle approximately 70% of the dataset extensively
shuffle_count = max(2, int(dataset_size * 0.7))
indices = list(range(dataset_size))
random.shuffle(indices)
# Apply the shuffled indices to reorder elements extensively
for i in range(shuffle_count):
if i < len(indices) and indices[i] < dataset_size:
# Swap current position with shuffled position
j = indices[i]
if i != j:
permutation_2[i], permutation_2[j] = permutation_2[j], permutation_2[i]
    pd.Series(permutation_1).to_csv(os.path.join(output_directory, 'gold_submission_permuted_1.csv'), index=False, header=[scoring_column])
    pd.Series(permutation_2).to_csv(os.path.join(output_directory, 'gold_submission_permuted_2.csv'), index=False, header=[scoring_column])
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV from dataset.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory where you will find the dataset')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import numpy as np
from datasets import Dataset
def reformat_dataset(input_split, target_split):
"""
Reformat time series dataset for forecasting task.
Args:
input_split: Dataset containing input time series data
target_split: Dataset containing target time series data (extended sequences)
Returns:
Dataset with input sequences and corresponding forecast targets
"""
input_data = input_split.to_pandas()
target_data = target_split.to_pandas()
# Create reformatted dataset
reformatted_data = {
'target': input_data['target'].tolist(),
'label_target': target_data['target'].tolist()
}
return Dataset.from_dict(reformatted_data)
|
U0MolecularPropertyPredictionQm9MeanAbsoluteError | Molecules and Proteins ML | Molecular Property Prediction | nimashoghi/qm9 | MeanAbsoluteError | metric_lower_is_better: true
file_export_globs:
- submission.csv
container_python_requirements:
- datasets==4.0.0
evaluate_container_python_requirements:
- datasets==4.0.0
- torchmetrics
- pandas
- numpy
- torch
logging_info:
name: U0MolecularPropertyPredictionQm9MeanAbsoluteError
dataset: nimashoghi/qm9
metric: MeanAbsoluteError
shape: (10000,1)
input_columns:
- pos
- atomic_numbers
- A
- B
- C
- natoms
- tags
- id
- composition
scoring_column: U_0
config: default
train_split: train
test_split: test
category: Molecules and Proteins ML
research_problem: Molecular Property Prediction
output_type: Object Classification
custom_gold_labels: true
custom_rad_class: false
additional_metrics: null
sota:
- sota_paper_title: 'EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations'
sota_paper_url: https://arxiv.org/pdf/2306.12059v3
sota_score: 5.83
sota_year: 2024
sota_venue: ICLR
dataset_paper_url: https://arxiv.org/abs/1402.1274
estimated_worst_score: 24183970
optimal_score: 0.0
| # Overview
## Task Description
This is a Machine Learning project and your goal is to complete the project's TASK following the instructions provided below.
TASK: Your task is to predict a molecular property of small molecules which is known as the **internal energy at 0K (U_0)**. This is a fundamental thermodynamic property that represents the total internal energy of a molecular system at absolute zero temperature (0 Kelvin). U_0 includes all kinetic and potential energies of the atoms and electrons within the molecule at the ground state, representing the minimum possible energy configuration. Values are given in meV, with more negative values typically indicating greater stability. This property is essential for understanding molecular ground-state energetics and serves as a reference point for thermodynamic calculations. Your predictions will be scored against the `U_0` column of the test set.
## Data
### Dataset Structure
The default config of qm9 has the following structure; each column is listed below with its name, contents, and data type:
```
{
"atomic_numbers": List[int64], # List of atomic numbers for each atom in the molecule (e.g., 1 for H, 6 for C, 8 for O)
"pos": List[List[float32]], # 3D Cartesian coordinates (x, y, z) for each atom, shape: [num_atoms, 3]
"A": float32, # Rotational constant A (GHz)
"B": float32, # Rotational constant B (GHz)
"C": float32, # Rotational constant C (GHz)
"natoms": int64, # Number of atoms in the molecule
"tags": List[int64], # Per-atom categorical labels
"composition": List[int64], # Vector encoding the count of each element in the molecule
}```
An example entry of the train dataset contains the following:
```
{
'atomic_numbers': [6, 1, 1, 1, 1], # Atomic numbers (e.g., C=6, H=1)
'pos': [
[-0.01269999984651804,1.085800051689148,0.00800000037997961],
[0.002199999988079071,-0.006000000052154064,0.0020000000949949026],[1.0117000341415405,1.4637999534606934,0.0003000000142492354],
[-0.5407999753952026,1.4474999904632568,-0.8766000270843506],
[-0.5238000154495239,1.4378999471664429,0.9064000248908997]
], # 3D coordinates for each atom
'A': 157.711807,
'B': 157.709976,
'C': 157.706985,
'natoms': 5, # Number of atoms (int64)
'tags': [2,2,2,2,2],
'id': '1_167',
'composition': [0,4,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
'U_0': -86.351288
}
```
> Note: The scoring column (`U_0`) has been removed from the test data. Ensure you preserve the original row order of the test set when producing predictions.
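As the example entry suggests, the `composition` column appears to be a per-element atom count indexed by atomic number (index 1 holds the hydrogen count, index 6 the carbon count). A minimal sketch reconstructing it from `atomic_numbers` — the helper name and the default vector length are illustrative assumptions, not part of the dataset spec:

```python
import numpy as np

def composition_vector(atomic_numbers, size=120):
    """Count atoms per atomic number: slot Z holds the number of atoms with
    atomic number Z. The default size is an assumption for illustration;
    match it to the actual length of the dataset's `composition` column."""
    vec = np.zeros(size, dtype=np.int64)
    for z in atomic_numbers:
        vec[z] += 1
    return vec

# Methane from the example entry above: one carbon (Z=6), four hydrogens (Z=1).
vec = composition_vector([6, 1, 1, 1, 1])
assert vec[1] == 4 and vec[6] == 1 and vec.sum() == 5
```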
Train data can be loaded using:
```python
from datasets import load_from_disk
train_dataset = load_from_disk('./data/train')
```
Validation data can be loaded using:
```python
from datasets import load_from_disk
val_dataset = load_from_disk('./data/val')
```
Test data can be loaded using:
```python
from datasets import load_from_disk
test_dataset = load_from_disk('./data/test')
```
### Submission file
The submission file must be a CSV named `submission.csv` containing the predicted values for the test set, with the following header:
```
U_0
```
And it should be of shape `(10000,1)` — one prediction per row, aligned with the test set order.
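A minimal sketch of producing a correctly shaped `submission.csv` — the constant placeholder values here merely stand in for real model predictions:

```python
import numpy as np
import pandas as pd

# Placeholder predictions: a real model would emit one U_0 value (in meV) per
# test molecule, in the original test-set row order.
preds = np.full(10000, -86.0, dtype=np.float32)

# One prediction per row under a single `U_0` header column.
pd.DataFrame({"U_0": preds}).to_csv("submission.csv", index=False)

# Sanity-check: the file round-trips with the expected (10000, 1) shape.
check = pd.read_csv("submission.csv")
assert check.shape == (10000, 1) and list(check.columns) == ["U_0"]
```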
### Evaluation
The evaluation will be performed on the `submission.csv` file you submit using the MeanAbsoluteError metric. Here is the evaluation script that will be used:
```
#!/usr/bin/env python3
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["U_0"])
import torch
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
    # The type of the targets is unknown ahead of time, so default to random scalar values
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
            # The CSV is expected to have a header row followed by one prediction per line
            # Adjust if your submission format differs (e.g., extra columns or no header)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
```
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import os
import sys
import logging
from datasets import load_dataset, load_from_disk
# Configure logger with custom prefix
logger = logging.getLogger('dataset_code')
handler = logging.StreamHandler()
formatter = logging.Formatter('[Running provided `dataset_code`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Main function for processing data directories.
:param global_shared_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
    :param agent_log_dir: Path to an agent's log directory.
"""
# Load from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
train = dataset['train']
val = dataset['val']
test = dataset['test']
# Scale U_0 labels by 1000
train = train.map(lambda example: {'U_0': example['U_0'] * 1000})
val = val.map(lambda example: {'U_0': example['U_0'] * 1000})
test = test.map(lambda example: {'U_0': example['U_0'] * 1000})
    # Remove all scoring columns except U_0 from the train/val sets (keep U_0 for training)
    extra_targets = [
        'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
        'R_2_Abs', 'ZPVE', 'U', 'H', 'G', 'c_v',
        'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
    ]
    train = train.remove_columns(extra_targets)
    val = val.remove_columns(extra_targets)
# Save to the agent data mount directory
train.save_to_disk(os.path.join(agent_data_mount_dir, 'train'))
val.save_to_disk(os.path.join(agent_data_mount_dir, 'val'))
# Remove all scoring columns including U_0 from test set (U_0 is the prediction target)
test = test.remove_columns([
'mu', 'alpha', 'eps_HOMO', 'eps_LUMO', 'delta_eps',
'R_2_Abs', 'ZPVE', 'U_0', 'U', 'H', 'G', 'c_v',
'U_0_ATOM', 'U_ATOM', 'H_ATOM', 'G_ATOM'
])
# Save test dataset to disk
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AirsBench raw data directory: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import sys
import argparse
import logging
import shutil
from datasets import load_from_disk
# Configure logger with custom prefix
SCRIPT_NAME = 'evaluate_prepare'
logger = logging.getLogger(SCRIPT_NAME)
handler = logging.StreamHandler()
formatter = logging.Formatter(f'[Running provided `{SCRIPT_NAME}`] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(global_shared_data_dir: str, agent_data_mount_dir: str, agent_log_dir: str) -> None:
"""
Loads test set from airsbench_raw_data_dir into agent_data_mount_dir.
Loads submission.csv from agent_log_dir into agent_data_mount_dir.
:param airsbench_raw_data_dir: Path to the AIRS-Bench raw data directory.
:param agent_data_mount_dir: Path to the agent data mount directory.
:param agent_log_dir: Path to an agents log directory.
"""
# Load test with labels from the raw data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale U_0 labels by 1000
test = test.map(lambda example: {'U_0': example['U_0'] * 1000})
test.save_to_disk(os.path.join(agent_data_mount_dir, 'test_with_labels'))
# Load submission.csv from the agent log directory
submission_fpath = os.path.join(agent_log_dir, 'submission.csv')
shutil.copy2(submission_fpath, os.path.join(agent_data_mount_dir, 'submission.csv'))
if __name__ == "__main__":
# Boilerplate code from prepare_boilerplate.py
parser = argparse.ArgumentParser(
description="Ensure the script is run with the required data directory arguments."
)
    parser.add_argument(
        "--global-shared-data-dir",
        required=True,
        help="Path to the sharded raw data directory, e.g. /checkpoint/maui/shared/airsbench-raw-data"
    )
    parser.add_argument(
        "--agent-data-mount-dir",
        required=True,
        help="Path to the agent data mount directory, e.g. ~/aira-dojo/data"
    )
    parser.add_argument(
        "--agent-log-dir",
        required=False,
        help="Path to a specific agent's log directory, e.g. /checkpoint/maui/shared/agent-log-dirs/agent-428391. This can be used e.g. to pull in data created by a previous agent."
    )
args = parser.parse_args()
# Validate that the provided directories exist
for path in [args.global_shared_data_dir, args.agent_data_mount_dir, args.agent_log_dir]:
if path is not None and not os.path.isdir(path):
print(f"Error: Directory not found: {path}", file=sys.stderr)
sys.exit(1)
logger.info(f"AIRSBench global_shared_data_dir: {args.global_shared_data_dir}")
logger.info(f"Agent data mount directory: {args.agent_data_mount_dir}")
if args.agent_log_dir:
logger.info(f"Agent log directory: {args.agent_log_dir}")
else:
logger.info("No agent log directory provided.")
main(
global_shared_data_dir=args.global_shared_data_dir,
agent_data_mount_dir=args.agent_data_mount_dir,
agent_log_dir=args.agent_log_dir,
)
| #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import json
import numpy as np
import pandas as pd
from datasets import load_from_disk
def load_test_set():
dataset = load_from_disk('./data/test_with_labels')
return np.array(dataset["U_0"])
import torch
from torchmetrics import MeanAbsoluteError
def evaluate(predictions, labels):
# Convert numpy arrays to tensors if needed
if isinstance(predictions, np.ndarray):
predictions = torch.from_numpy(predictions)
if isinstance(labels, np.ndarray):
labels = torch.from_numpy(labels)
# Ensure inputs are torch tensors with float type
if not isinstance(predictions, torch.Tensor):
predictions = torch.tensor(predictions, dtype=torch.float32)
if not isinstance(labels, torch.Tensor):
labels = torch.tensor(labels, dtype=torch.float32)
predictions = predictions.float()
labels = labels.float()
# Initialize and compute metric
metric = MeanAbsoluteError(num_outputs=1)
score = metric(predictions, labels)
return {"MeanAbsoluteError": score.item()}
def generate_dummy_predictions(n_samples):
    # The type of the targets is unknown ahead of time, so default to random scalar values
predictions = np.random.uniform(0, 1, size=(n_samples, 1))
return predictions.squeeze()
def _cli():
p = argparse.ArgumentParser(
description="Evaluate predictions for qm9 test split using MeanAbsoluteError."
)
p.add_argument("--dummy-submission", action="store_true",
help="Evaluate with randomly generated dummy predictions.")
p.add_argument("--submission-file", default="submission.csv",
help="Path to CSV file containing predictions.")
a = p.parse_args()
print("Loading test set labels...")
labels = load_test_set()
n_test_samples = len(labels)
print(f"Loaded {n_test_samples} labels.")
if a.dummy_submission:
print(f"Generating {n_test_samples} dummy predictions...")
preds = generate_dummy_predictions(n_test_samples)
else:
print(f"Loading predictions from: {a.submission_file}")
try:
            # The CSV is expected to have a header row followed by one prediction per line
            # Adjust if your submission format differs (e.g., extra columns or no header)
submission_df = pd.read_csv(a.submission_file, header=0)
preds = submission_df.values.squeeze()
if preds.shape[0] != n_test_samples:
raise ValueError(
f"Submission file row count ({preds.shape[0]}) "
f"does not match test set size ({n_test_samples})."
)
# Further shape validation could be added here against spec['shape_per_item']
# e.g., if preds.shape[1:] != spec['shape_per_item']
except FileNotFoundError:
p.error(f"Submission file not found: {a.submission_file}")
except Exception as e:
p.error(f"Error loading submission_file: {e}")
print("Evaluating predictions...")
result = evaluate(preds, labels)
print("\n--- EVALUATION RESULT ---")
print(json.dumps(result, indent=2))
if __name__ == '__main__':
_cli()
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os
import argparse
import pandas as pd
from datasets import load_from_disk
def main(
global_shared_data_dir,
output_directory
):
"""
Creates gold_submission.csv files with scaled U_0 labels (multiplied by 1000)
"""
# Load the QM9 dataset from the shared data directory
dataset_source_fpath = os.path.join(global_shared_data_dir, 'nimashoghi/qm9/default')
dataset = load_from_disk(dataset_source_fpath)
test = dataset['test']
# Scale U_0 labels by 1000
scaled_labels = [label * 1000 for label in test['U_0']]
# Save as gold_submission.csv
output_file = os.path.join(output_directory, 'gold_submission.csv')
pd.Series(scaled_labels).to_csv(output_file, index=False, header=['U_0'])
print(f"Saved scaled U_0 labels to {output_file}")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate gold submission CSV with scaled labels.")
parser.add_argument('--global-shared-data-dir', type=str, required=True, help='Path to the global shared data directory')
parser.add_argument('--output-directory', type=str, required=True, help='Directory to save the output CSV')
args = parser.parse_args()
main(args.global_shared_data_dir, args.output_directory)
| null |