Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: CastError
Message: Couldn't cast
repo: string
instance_id: string
base_commit: string
patch: string
test_patch: string
problem_statement: string
hint_text: null
created_at: string
closed_at: string
version: string
FAIL_TO_PASS: list<element: string>
child 0, element: string
PASS_TO_PASS: list<element: string>
child 0, element: string
environment_setup_commit: null
command_build: string
command_test: string
image_name: string
meta: struct<issue_number: int64, merge_commit: string, merged_at: string, pr_number: int64, score: struct<complexity: int64, task_correctness: int64, test_completeness: int64, test_correctness: int64>, timeout_build: int64, timeout_test: int64, type: string, url: struct<diff: string, issue: string, pr: string>>
child 0, issue_number: int64
child 1, merge_commit: string
child 2, merged_at: string
child 3, pr_number: int64
child 4, score: struct<complexity: int64, task_correctness: int64, test_completeness: int64, test_correctness: int64>
child 0, complexity: int64
child 1, task_correctness: int64
child 2, test_completeness: int64
child 3, test_correctness: int64
child 5, timeout_build: int64
child 6, timeout_test: int64
child 7, type: string
child 8, url: struct<diff: string, issue: string, pr: string>
child 0, diff: string
child 1, issue: string
child 2, pr: string
sample_type: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 2451
to
{'repo': Value('string'), 'instance_id': Value('string'), 'base_commit': Value('string'), 'patch': Value('string'), 'test_patch': Value('string'), 'problem_statement': Value('null'), 'hint_text': Value('null'), 'created_at': Value('string'), 'closed_at': Value('string'), 'version': Value('string'), 'FAIL_TO_PASS': List(Value('null')), 'PASS_TO_PASS': List(Value('string')), 'environment_setup_commit': Value('null'), 'command_build': Value('string'), 'command_test': Value('string'), 'image_name': Value('string'), 'meta': {'issue_number': Value('int64'), 'merge_commit': Value('string'), 'merged_at': Value('string'), 'pr_number': Value('int64'), 'score': {'complexity': Value('int64'), 'task_correctness': Value('int64'), 'test_completeness': Value('int64'), 'test_correctness': Value('int64')}, 'timeout_build': Value('int64'), 'timeout_test': Value('int64'), 'type': Value('string'), 'url': {'diff': Value('string'), 'issue': Value('string'), 'pr': Value('string')}}}
because column names don't match
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1405, in compute_config_parquet_and_info_response
fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 578, in fill_builder_info
) = retry_validate_get_features_num_examples_size_and_compression_ratio(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 497, in retry_validate_get_features_num_examples_size_and_compression_ratio
validate(pf)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 535, in validate
raise TooBigRowGroupsError(
worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 933526924 which exceeds the limit of 300000000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 692, in wrapped
for item in generator(*args, **kwargs):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
[file schema and declared features omitted; identical to the Message shown above]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1428, in compute_config_parquet_and_info_response
parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 994, in stream_convert_to_parquet
builder._prepare_split(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
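The first failure in the traceback, TooBigRowGroupsError, reports a first Parquet row group of 933,526,924 bytes against the viewer's 300,000,000-byte limit. When a file trips that check, the worker falls back to streaming conversion (stream_convert_to_parquet in the second traceback), and that is where the cast error surfaces. A minimal sketch of how a dataset author could rewrite a shard with smaller row groups using pyarrow; the file names are placeholders, not the dataset's actual shard names:

```python
import pyarrow.parquet as pq

# Placeholder file names; substitute the dataset's actual shard paths.
table = pq.read_table("train-00000-of-00001.parquet")

# Rewrite with bounded row groups (here at most 100 rows each) so no
# single group approaches the viewer's 300 MB limit.
pq.write_table(table, "train-00000-of-00001.fixed.parquet", row_group_size=100)
```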
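The cast error itself is a type mismatch rather than a naming one: every column name matches, but the file stores problem_statement as string and FAIL_TO_PASS as list&lt;element: string&gt;, while the declared features expect Value('null') and List(Value('null')), types evidently inferred from data where those columns were entirely empty. As a hedged workaround sketch, one can read the Parquet shards directly so the schema comes from the files themselves; the repo id and shard name below are placeholders:

```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Placeholders; substitute the real dataset repo id and shard file name.
path = hf_hub_download(
    repo_id="user/dataset",
    filename="data/train-00000-of-00001.parquet",
    repo_type="dataset",
)

table = pq.read_table(path)
print(table.schema)        # the schema as actually stored in the file
df = table.to_pandas()     # bypasses the mis-inferred dataset features
```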
Preview columns (17): repo (string), instance_id (string), base_commit (string), patch (string), test_patch (string), problem_statement (null), hint_text (null), created_at (string), closed_at (string), version (string), FAIL_TO_PASS (list), PASS_TO_PASS (list), environment_setup_commit (null), command_build (string), command_test (string), image_name (string), meta (dict).
Row 1
repo: ansys/pydyna
instance_id: ansys__pydyna-720
base_commit: f7c50f6978985b174a4e0de2fb78805f730c39a4
patch:
diff --git a/doc/changelog/720.fixed.md b/doc/changelog/720.fixed.md
new file mode 100644
index 00000000..c2563475
--- /dev/null
+++ b/doc/changelog/720.fixed.md
@@ -0,0 +1 @@
+fix: deck.get() in the presence of Encrypted keywords
\ No newline at end of file
diff --git a/src/ansys/dyna/core/lib/deck.py b/src/ansys/dyna/core/lib/deck.py
index c384fe80..ec85c1c5 100644
--- a/src/ansys/dyna/core/lib/deck.py
+++ b/src/ansys/dyna/core/lib/deck.py
@@ -393,7 +393,7 @@ class Deck:
>>>deck.get_kwds_by_type("SECTION")
"""
- return filter(lambda kwd: not isinstance(kwd, str) and kwd.keyword == type, self._keywords)
+ return filter(lambda kwd: isinstance(kwd, KeywordBase) and kwd.keyword == type, self._keywords)
def get_section_by_id(self, id: int) -> typing.Optional[KeywordBase]:
"""Get the SECTION keyword in the collection for a given section ID.
test_patch:
diff --git a/tests/test_deck.py b/tests/test_deck.py
index 4286e2b2..cd3528a2 100644
--- a/tests/test_deck.py
+++ b/tests/test_deck.py
@@ -459,9 +459,11 @@ def test_deck_encrypted_import_expand(file_utils):
"""Import an encrypted file as a deck."""
deck = Deck()
filename = file_utils.assets_folder / "test_input_deck_1_1024bit.asc"
+ deck.append(kwd.Node())
deck.append(kwd.Include(filename=filename))
deck = deck.expand()
_verify_encrypted_deck(deck)
+ assert len(deck.get(type="NODE")) == 1
@pytest.mark.keywords
def test_deck_remove_superfluous_newlines(ref_string):
problem_statement: null
hint_text: null
created_at: 2025-02-17T15:25:37Z
closed_at: 2025-02-17T15:53:52Z
version: 0.4.6
FAIL_TO_PASS: []
PASS_TO_PASS: [
"tests/test_deck.py::test_deck_long_deck_write_standard",
"tests/test_deck.py::test_control_debug",
"tests/test_deck.py::test_deck_long_deck_write_noargs",
"tests/test_deck.py::test_deck_expand_transform",
"tests/test_deck.py::test_deck_load_title",
"tests/test_deck.py::test_deck_expand_recursive_include_path",
"tests/test_deck.py::test_deck_remove_superfluous_newlines",
"tests/test_deck.py::test_deck_kwlist",
"tests/test_deck.py::test_deck_read_parameters",
"tests/test_deck.py::test_deck_005",
"tests/test_deck.py::test_kwdeck_basic_002",
"tests/test_deck.py::test_deck_expand",
"tests/test_deck.py::test_deck_encrypted_import",
"tests/test_deck.py::test_deck_expand_transform_custom_handler",
"tests/test_deck.py::test_deck_read_long_deck_standard_keyword",
"tests/test_deck.py::test_kwdeck_basic_001",
"tests/test_deck.py::test_deck_007",
"tests/test_deck.py::test_deck_read_long_deck_long_keyword",
"tests/test_deck.py::test_deck_002",
"tests/test_deck.py::test_deck_long_deck_write_standard_keyword",
"tests/test_deck.py::test_deck_remove",
"tests/test_deck.py::test_deck_default_write_long",
"tests/test_deck.py::test_variable_card_read_write_set",
"tests/test_deck.py::test_deck_004",
"tests/test_deck.py::test_deck_copy",
"tests/test_deck.py::test_deck_006",
"tests/test_deck.py::test_deck_001",
"tests/test_deck.py::test_deck_with_contacts_and_ids",
"tests/test_deck.py::test_deck_encrypted_import_expand",
"tests/test_deck.py::test_deck_003",
"tests/test_deck.py::test_deck_unprocessed"
]
environment_setup_commit: null
command_build:
pip install -e .;
pip install pytest pytest-json-report;
command_test:
pytest --json-report --json-report-file=report_pytest.json
image_name: python:3.11
meta:
{
"issue_number": 719,
"merge_commit": "2f1cadab738e6e1ba35df7cdf9170ebe71d8eb2b",
"merged_at": "2025-02-17T15:53:51Z",
"pr_number": 720,
"score": {
"complexity": -1,
"task_correctness": -1,
"test_completeness": -1,
"test_correctness": -1
},
"timeout_build": 3000,
"timeout_test": 600,
"type": "python pytest",
"url": {
"diff": "https://patch-diff.githubusercontent.com/raw/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/pull/720.diff",
"issue": "https://github.com/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/issues/719",
"pr": "https://github.com/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/pull/720"
}
}
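Taken together, the fields describe a SWE-bench-style evaluation loop: check out base_commit in a python:3.11 container, run command_build, apply test_patch (plus a candidate fix patch), run command_test to produce report_pytest.json, then require every FAIL_TO_PASS test to pass and every PASS_TO_PASS test to keep passing. A hypothetical harness sketch under those assumptions; sh, evaluate, and workdir are illustrative names, not part of the dataset:

```python
import json
import subprocess

def sh(cmd: str, cwd: str, stdin: bytes | None = None) -> None:
    # Run a shell command in the working copy, raising on failure.
    subprocess.run(cmd, shell=True, cwd=cwd, input=stdin, check=True)

def evaluate(row: dict, workdir: str) -> bool:
    sh(f"git checkout {row['base_commit']}", workdir)
    sh(row["command_build"], workdir)          # e.g. "pip install -e .; ..."
    # Apply the gold test patch; a candidate fix patch would be applied here too.
    sh("git apply -", workdir, stdin=row["test_patch"].encode())
    sh(row["command_test"], workdir)           # writes report_pytest.json
    with open(f"{workdir}/report_pytest.json") as f:
        report = json.load(f)
    # pytest-json-report lists each test with its nodeid and outcome.
    outcome = {t["nodeid"]: t["outcome"] for t in report["tests"]}
    fail_to_pass = all(outcome.get(t) == "passed" for t in row["FAIL_TO_PASS"])
    pass_to_pass = all(outcome.get(t) == "passed" for t in row["PASS_TO_PASS"])
    return fail_to_pass and pass_to_pass
```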
Row 2
repo: DataBiosphere/toil
instance_id: DataBiosphere__toil-5221
base_commit: 88d656ce0cec4e5e6b215596c47218321fd9cb7a
patch:
diff --git a/src/toil/batchSystems/slurm.py b/src/toil/batchSystems/slurm.py
index 3ddddff6..df6ebc9b 100644
--- a/src/toil/batchSystems/slurm.py
+++ b/src/toil/batchSystems/slurm.py
@@ -13,6 +13,7 @@
# limitations under the License.
from __future__ import annotations
+import errno
import logging
import math
import os
@@ -185,6 +186,8 @@ class SlurmBatchSystem(AbstractGridEngineBatchSystem):
def get_partition(self, time_limit: float | None) -> str | None:
"""
Get the partition name to use for a job with the given time limit.
+
+ :param time_limit: Time limit in seconds.
"""
if time_limit is None:
@@ -193,17 +196,36 @@ class SlurmBatchSystem(AbstractGridEngineBatchSystem):
winning_partition = None
for partition in self.all_partitions:
- if partition.time_limit >= time_limit and (
- winning_partition is None
- or partition.time_limit < winning_partition.time_limit
- ):
- # If this partition can fit the job and is faster than the current winner, take it
+ if partition.time_limit < time_limit:
+ # Can't use this
+ continue
+ if winning_partition is None:
+ # Anything beats None
+ winning_partition = partition
+ continue
+ if partition.gres and not winning_partition.gres:
+ # Never use a partition witn GRES if you can avoid it
+ continue
+ elif not partition.gres and winning_partition.gres:
+ # Never keep a partition with GRES if we find one without
+ winning_partition = partition
+ continue
+ if partition.priority > winning_partition.priority:
+ # After that, don't raise priority
+ continue
+ elif partition.priority < winning_partition.priority:
+ # And always lower it
+ winning_partition = partition
+ continue
+ if partition.time_limit < winning_partition.time_limit:
+ # Finally, lower time limit
winning_partition = partition
+
# TODO: Store partitions in a better indexed way
if winning_partition is None and len(self.all_partitions) > 0:
# We have partitions and none of them can fit this
raise RuntimeError(
- "Could not find a Slurm partition that can fit a job that runs for {time_limit} seconds"
+ f"Could not find a Slurm partition that can fit a job that runs for {time_limit} seconds"
)
if winning_partition is None:
@@ -344,7 +366,9 @@ class SlurmBatchSystem(AbstractGridEngineBatchSystem):
"""
try:
status_dict = self._getJobDetailsFromSacct(job_id_list)
- except CalledProcessErrorStderr:
+ except (CalledProcessErrorStderr, OSError) as e:
+ if isinstance(e, OSError):
+ logger.warning("Could not run sacct: %s", e)
status_dict = self._getJobDetailsFromScontrol(job_id_list)
return status_dict
@@ -437,11 +461,25 @@ class SlurmBatchSystem(AbstractGridEngineBatchSystem):
"-S",
"1970-01-01",
] # override start time limit
- stdout = call_command(args, quiet=True)
# Collect the job statuses in a dict; key is the job-id, value is a tuple containing
# job state and exit status. Initialize dict before processing output of `sacct`.
job_statuses: dict[int, tuple[str | None, int | None]] = {}
+
+ try:
+ stdout = call_command(args, quiet=True)
+ except OSError as e:
+ if e.errno == errno.E2BIG:
+ # Argument list is too big, recurse on half the argument list
+ if len(job_id_list) == 1:
+ # 1 is too big, we can't recurse further, bail out
+ raise
+ job_statuses.update(self._getJobDetailsFromSacct(job_id_list[:len(job_id_list)//2]))
+ job_statuses.update(self._getJobDetailsFromSacct(job_id_list[len(job_id_list)//2:]))
+ return job_statuses
+ else:
+ raise
+
for job_id in job_id_list:
job_statuses[job_id] = (None, None)
@@ -649,6 +687,9 @@ class SlurmBatchSystem(AbstractGridEngineBatchSystem):
if cpu is not None:
sbatch_line.append(f"--cpus-per-task={math.ceil(cpu)}")
+ if any(option.startswith("--time=") or option == "--time" or option == "-t" for option in sbatch_line):
+ raise RuntimeError("Support for manual --time option has been replaced by Toil --slurmTime")
+
time_limit: int = self.boss.config.slurm_time # type: ignore[attr-defined]
if time_limit is not None:
# Put all the seconds in the seconds slot
test_patch:
diff --git a/src/toil/test/batchSystems/test_slurm.py b/src/toil/test/batchSystems/test_slurm.py
index b2818359..2c6aaeb4 100644
--- a/src/toil/test/batchSystems/test_slurm.py
+++ b/src/toil/test/batchSystems/test_slurm.py
@@ -1,3 +1,4 @@
+import errno
import textwrap
from queue import Queue
@@ -29,6 +30,9 @@ def call_sacct(args, **_) -> str:
1236|FAILED|0:2
1236.extern|COMPLETED|0:0
"""
+ if sum(len(a) for a in args) > 1000:
+ # Simulate if the argument list is too long
+ raise OSError(errno.E2BIG, "Argument list is too long")
# Fake output per fake job-id.
sacct_info = {
609663: "609663|FAILED|0:2\n609663.extern|COMPLETED|0:0\n",
@@ -173,6 +177,26 @@ def call_sacct_raises(*_):
1, "sacct: error: Problem talking to the database: " "Connection timed out"
)
+def call_sinfo(*_) -> str:
+ """
+ Simulate asking for partition info from Slurm
+ """
+ stdout = textwrap.dedent(
+ """\
+ PARTITION GRES TIMELIMIT PRIO_TIER CPUS MEMORY
+ short* (null) 1:00:00 500 256+ 1996800+
+ medium (null) 12:00:00 500 256+ 1996800+
+ long (null) 14-00:00:00 500 256+ 1996800+
+ gpu gpu:A100:8 7-00:00:00 5000 256 996800
+ gpu gpu:A5500:8 7-00:00:00 5000 256 1996800
+ high_priority gpu:A5500:8 7-00:00:00 65000 256 1996800
+ high_priority (null) 7-00:00:00 65000 256+ 1996800+
+ simple_nodelist gpu:A100:8 1:00 65000 256 996800
+ simple_nodelist gpu:A5500:8 1:00 65000 256 1996800
+ simple_nodelist (null) 1:00 65000 256+ 1996800+
+ """
+ )
+ return stdout
class FakeBatchSystem:
"""
@@ -262,6 +286,13 @@ class SlurmTest(ToilTest):
result = self.worker._getJobDetailsFromSacct(list(expected_result))
assert result == expected_result, f"{result} != {expected_result}"
+ def test_getJobDetailsFromSacct_argument_list_too_big(self):
+ self.monkeypatch.setattr(toil.batchSystems.slurm, "call_command", call_sacct)
+ expected_result = {i: (None, None) for i in range(2000)}
+ result = self.worker._getJobDetailsFromSacct(list(expected_result))
+ assert result == expected_result, f"{result} != {expected_result}"
+
+
####
#### tests for _getJobDetailsFromScontrol()
####
@@ -449,3 +480,34 @@ class SlurmTest(ToilTest):
pass
else:
assert False, "Exception CalledProcessErrorStderr not raised"
+
+ ###
+ ### Tests for partition selection
+ ##
+
+ def test_PartitionSet_get_partition(self):
+ self.monkeypatch.setattr(toil.batchSystems.slurm, "call_command", call_sinfo)
+ ps = toil.batchSystems.slurm.SlurmBatchSystem.PartitionSet()
+
+ # At zero. short will win because simple_nodelist has higher priority.
+ self.assertEqual(ps.get_partition(0), "short")
+ # Easily within the partition
+ self.assertEqual(ps.get_partition(10 * 60), "short")
+ # Exactly on the boundary
+ self.assertEqual(ps.get_partition(60 * 60), "short")
+ # Well within the next partition
+ self.assertEqual(ps.get_partition(2 * 60 * 60), "medium")
+ # Can only fit in long
+ self.assertEqual(ps.get_partition(8 * 24 * 60 * 60), "long")
+ # Could fit in gpu or long
+ self.assertEqual(ps.get_partition(6 * 24 * 60 * 60), "long")
+ # Can't fit in anything
+ with self.assertRaises(Exception):
+ ps.get_partition(365 * 24 * 60 * 60)
+
+ def test_PartitionSet_default_gpu_partition(self):
+ self.monkeypatch.setattr(toil.batchSystems.slurm, "call_command", call_sinfo)
+ ps = toil.batchSystems.slurm.SlurmBatchSystem.PartitionSet()
+
+ # Make sure we picked the useful-length GPU partition and not the super short one.
+ self.assertEqual(ps.default_gpu_partition.partition_name, "gpu")
problem_statement: null
hint_text: null
created_at: 2024-08-19T13:45:15Z
closed_at: 2025-03-04T18:26:32Z
version: 0.4.6
FAIL_TO_PASS: []
PASS_TO_PASS: [
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_PartitionSet_get_partition",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_one_not_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobExitCode_job_not_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_many_none_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_one_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_argument_list_too_big",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_sacct_raises_job_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromScontrol_one_not_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_many_all_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_some_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_one_not_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_coalesce_job_exit_codes_sacct_raises_job_not_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_many_some_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobExitCode_sacct_raises_job_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_one_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromScontrol_many_some_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_PartitionSet_default_gpu_partition",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromScontrol_many_none_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromScontrol_one_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobExitCode_job_exists",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromScontrol_many_all_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_many_all_exist",
"src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobExitCode_sacct_raises_job_not_exists"
]
environment_setup_commit: null
command_build:
pip install -e .;
pip install pytest pytest-json-report;
command_test:
pytest --json-report --json-report-file=report_pytest.json
image_name: python:3.11
meta:
{
"issue_number": 5064,
"merge_commit": "deb243bd78f49519189bcec75df0bad9b983e7dc",
"merged_at": "2025-03-04T18:26:30Z",
"pr_number": 5221,
"score": {
"complexity": -1,
"task_correctness": -1,
"test_completeness": -1,
"test_correctness": -1
},
"timeout_build": 3000,
"timeout_test": 600,
"type": "python pytest",
"url": {
"diff": "https://patch-diff.githubusercontent.com/raw/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/pull/5221.diff",
"issue": "https://github.com/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/issues/5064",
"pr": "https://github.com/0 ansys/pydyna\n1 DataBiosphere/toil\n2 rapidsai/dependency-file-generator\n3 gradio-app/gradio\n4 getmoto/moto\n ... \n1407 yt-dlp/yt-dlp\n1408 CPMpy/cpmpy\n1409 awslabs/amazon-transcribe-streaming-sdk\n1410 jlowin/fastmcp\n1411 isaacharrisholt/quiffen\nName: repo, Length: 1412, dtype: object/pull/5221"
}
}
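One data-quality note: in the underlying data, each meta.url value embeds the repr of the entire repo column (1,412 entries, ending in "Name: repo, Length: 1412, dtype: object") where a single repository name belongs; the intended URLs are recoverable from each row's repo, pr_number, and issue_number fields. That corruption pattern is what an f-string produces when handed a whole DataFrame column instead of one row's value, as in this hypothetical reproduction (the two-row DataFrame is illustrative):

```python
import pandas as pd

# Illustrative stand-in for the dataset's 1,412-row repo table.
df = pd.DataFrame({"repo": ["ansys/pydyna", "DataBiosphere/toil"],
                   "pr_number": [720, 5221]})

for row in df.itertuples():
    # Buggy: df["repo"] is the whole Series, so its multi-line repr
    # ("0  ansys/pydyna ... Name: repo, dtype: object") lands in every URL.
    bad = f"https://github.com/{df['repo']}/pull/{row.pr_number}"

    # Fixed: interpolate the current row's scalar value instead.
    good = f"https://github.com/{row.repo}/pull/{row.pr_number}"
    print(good)
```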
YAML Metadata Warning: empty or missing YAML metadata in the repo card (https://huggingface.co/docs/hub/datasets-cards). README.md exists but its content is empty.
Downloads last month: 4