Drop sparse columns (#7) — commit abd62f7 (eachanjohnson, verified)
+ DATA=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/spark-mic_cleaned-2510.csv
+ DATA_OUTPUT_DIR=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2
+ FEATURE=smiles
+ LABEL=pmic
+ python -m venv /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist
+ /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/bin/pip install 'schemist>=0.0.4.post1' pandas
Requirement already satisfied: schemist>=0.0.4.post1 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (0.0.4.post1)
Requirement already satisfied: pandas in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (2.2.3)
Requirement already satisfied: requests in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (2.32.3)
Requirement already satisfied: descriptastorus>=2.7 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (2.8.0)
Requirement already satisfied: selfies in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (2.2.0)
Requirement already satisfied: openpyxl==3.1.0 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (3.1.0)
Requirement already satisfied: nemony in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (0.0.2)
Requirement already satisfied: rdkit>=2022.09.5 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (2024.9.6)
Requirement already satisfied: carabiner-tools[pd]>=0.0.4 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from schemist>=0.0.4.post1) (0.0.4)
Requirement already satisfied: et-xmlfile in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from openpyxl==3.1.0->schemist>=0.0.4.post1) (2.0.0)
Requirement already satisfied: tzdata>=2022.7 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from pandas) (2025.2)
Requirement already satisfied: pytz>=2020.1 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from pandas) (2025.2)
Requirement already satisfied: python-dateutil>=2.8.2 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from pandas) (2.9.0.post0)
Requirement already satisfied: numpy>=1.22.4 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from pandas) (2.2.4)
Requirement already satisfied: tqdm in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from carabiner-tools[pd]>=0.0.4->schemist>=0.0.4.post1) (4.67.1)
Requirement already satisfied: pandas-flavor in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from descriptastorus>=2.7->schemist>=0.0.4.post1) (0.6.0)
Requirement already satisfied: scipy in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from descriptastorus>=2.7->schemist>=0.0.4.post1) (1.15.2)
Requirement already satisfied: six>=1.5 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from python-dateutil>=2.8.2->pandas) (1.17.0)
Requirement already satisfied: Pillow in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from rdkit>=2022.09.5->schemist>=0.0.4.post1) (11.1.0)
Requirement already satisfied: pyyaml in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from nemony->schemist>=0.0.4.post1) (6.0.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from requests->schemist>=0.0.4.post1) (2.3.0)
Requirement already satisfied: idna<4,>=2.5 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from requests->schemist>=0.0.4.post1) (3.10)
Requirement already satisfied: charset-normalizer<4,>=2 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from requests->schemist>=0.0.4.post1) (3.4.1)
Requirement already satisfied: certifi>=2017.4.17 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from requests->schemist>=0.0.4.post1) (2025.1.31)
Requirement already satisfied: xarray in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from pandas-flavor->descriptastorus>=2.7->schemist>=0.0.4.post1) (2025.3.0)
Requirement already satisfied: packaging>=23.2 in /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/lib/python3.10/site-packages (from xarray->pandas-flavor->descriptastorus>=2.7->schemist>=0.0.4.post1) (24.2)
[notice] A new release of pip is available: 23.0.1 -> 25.2
[notice] To update, run: python -m pip install --upgrade pip
+ source /nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/bin/activate
++ deactivate nondestructive
++ '[' -n '' ']'
++ '[' -n '' ']'
++ '[' -n /usr/bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ unset VIRTUAL_ENV_PROMPT
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/camp/home/johnsoe/.conda/envs/dev/bin:/camp/apps/eb/software/Anaconda3/2023.09-0/condabin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/camp/home/johnsoe/.local/bin:/camp/home/johnsoe/bin
++ PATH=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/.schemist/bin:/camp/home/johnsoe/.conda/envs/dev/bin:/camp/apps/eb/software/Anaconda3/2023.09-0/condabin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/camp/home/johnsoe/.local/bin:/camp/home/johnsoe/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1=
++ PS1='(.schemist) '
++ export PS1
++ VIRTUAL_ENV_PROMPT='(.schemist) '
++ export VIRTUAL_ENV_PROMPT
++ '[' -n /usr/bin/bash -o -n '' ']'
++ hash -r
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2
+ wt_data=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv
+ sed '1s/^\xEF\xBB\xBF//' /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/spark-mic_cleaned-2510.csv
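The `sed` expression above strips a leading UTF-8 byte-order mark from the first line of the CSV. For reference, `\xEF\xBB\xBF` is exactly the UTF-8 encoding of U+FEFF, which spreadsheet exports often prepend; a minimal stdlib illustration of the same fix (the header text here is illustrative):

```python
# The three bytes removed by the sed expression are the UTF-8
# byte-order mark (BOM), which Excel often prepends to exported CSVs.
BOM = b"\xEF\xBB\xBF"
assert BOM.decode("utf-8") == "\ufeff"  # U+FEFF ZERO WIDTH NO-BREAK SPACE

def strip_bom(first_line: bytes) -> bytes:
    """Remove a leading UTF-8 BOM from a file's first line, if present."""
    return first_line[len(BOM):] if first_line.startswith(BOM) else first_line

print(strip_bom(b"\xEF\xBB\xBFspark_SMILES,pmic").decode())  # prints: spark_SMILES,pmic
```

Left in place, the BOM would be swallowed into the first column name (`\ufeffspark_SMILES`), breaking every later query that references that column.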
+ pandas '.drop(columns=[
"spark_mwt",
"compound_cas_no",
"approval_date",
"publication_lab",
"is_buffered",
"molecule_chembl_id",
"molecule_chembl_source",
"pubchem_link",
"data_source_info",
"data_source_id",
"external_compound_id",
"strain_notes",
"strain_phenotype",
"incubation_conditions",
"spark_extractor_notes",
"solvent_percent",
])'
+ pandas '.query("not spark_SMILES.isna()")'
+ pandas '.query("not pmic.isna()")'
+ pandas '.query("not species.isna()")'
+ local 'cmd=.query("not spark_SMILES.isna()")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ local 'cmd=.drop(columns=[
"spark_mwt",
"compound_cas_no",
"approval_date",
"publication_lab",
"is_buffered",
"molecule_chembl_id",
"molecule_chembl_source",
"pubchem_link",
"data_source_info",
"data_source_id",
"external_compound_id",
"strain_notes",
"strain_phenotype",
"incubation_conditions",
"spark_extractor_notes",
"solvent_percent",
])'
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("not spark_SMILES.isna()").to_csv(sys.stdout, index=False, sep=",")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).drop(columns=[
"spark_mwt",
"compound_cas_no",
"approval_date",
"publication_lab",
"is_buffered",
"molecule_chembl_id",
"molecule_chembl_source",
"pubchem_link",
"data_source_info",
"data_source_id",
"external_compound_id",
"strain_notes",
"strain_phenotype",
"incubation_conditions",
"spark_extractor_notes",
"solvent_percent",
]).to_csv(sys.stdout, index=False, sep=",")'
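The expanded `python -c` one-liners above all follow one pattern: read CSV from stdin, apply a pandas expression, write CSV back to stdout. A dependency-free sketch of the same drop-columns step using only the standard library (the sample column names are illustrative, not the real schema):

```python
import csv
import io

def drop_columns(csv_text: str, drop: set[str]) -> str:
    """Read CSV text, remove the named columns, and return new CSV text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    kept = [c for c in reader.fieldnames if c not in drop]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=kept, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in kept})
    return out.getvalue()

sample = "smiles,pmic,strain_notes\nCCO,5.1,old assay\n"
print(drop_columns(sample, {"strain_notes"}))
# prints:
# smiles,pmic
# CCO,5.1
```

The pipeline uses pandas instead so that the drop composes with `.query(...)` filters in the same expression; the streaming stdin-to-stdout shape is what lets each step sit in a shell pipe.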
+ pandas '; val = "#NUM!"; df.query("spark_SMILES != @val")'
+ pandas '; val = "#NUM!"; df.query("pmic != @val")'
+ local 'cmd=.query("not pmic.isna()")'
+ local sep1=,
+ local 'cmd=.query("not species.isna()")'
+ local idx=False
+ local sep2=,
+ local sep1=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("not pmic.isna()").to_csv(sys.stdout, index=False, sep=",")'
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("not species.isna()").to_csv(sys.stdout, index=False, sep=",")'
+ schemist convert -c spark_SMILES -2 id smiles inchikey scaffold mwt clogp tpsa -f CSV -x prefix=SCB-
+ pandas '; import numpy as np; df.assign(
species=lambda x: np.where(x["species"].isna() & x["strain_name"].str.startswith("PAO1-"), "Pseudomonas aeruginosa", x["species"]),
strain_genotype2=lambda x: x["strain_genotype"].fillna("WT"),
full_strain_name=lambda x: x["species"].str.cat(x["strain_name"].fillna(""), sep=" ").str.rstrip(),
full_strain_name_with_genotype=lambda x: x["full_strain_name"].str.cat(x["strain_genotype"].fillna(""), sep=" ").str.rstrip(),
strain_genotype=lambda x: x["strain_genotype2"],
).drop(columns="strain_genotype2")'
+ local 'cmd=; val = "#NUM!"; df.query("pmic != @val")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); val = "#NUM!"; df.query("pmic != @val").to_csv(sys.stdout, index=False, sep=",")'
+ pandas '.query("not smiles.isna() and not species.isna()")'
+ local 'cmd=; import numpy as np; df.assign(
species=lambda x: np.where(x["species"].isna() & x["strain_name"].str.startswith("PAO1-"), "Pseudomonas aeruginosa", x["species"]),
strain_genotype2=lambda x: x["strain_genotype"].fillna("WT"),
full_strain_name=lambda x: x["species"].str.cat(x["strain_name"].fillna(""), sep=" ").str.rstrip(),
full_strain_name_with_genotype=lambda x: x["full_strain_name"].str.cat(x["strain_genotype"].fillna(""), sep=" ").str.rstrip(),
strain_genotype=lambda x: x["strain_genotype2"],
).drop(columns="strain_genotype2")'
+ local 'cmd=; val = "#NUM!"; df.query("spark_SMILES != @val")'
+ local sep1=,
+ local sep1=,
+ local idx=False
+ local idx=False
+ local sep2=,
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); val = "#NUM!"; df.query("spark_SMILES != @val").to_csv(sys.stdout, index=False, sep=",")'
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); import numpy as np; df.assign(
species=lambda x: np.where(x["species"].isna() & x["strain_name"].str.startswith("PAO1-"), "Pseudomonas aeruginosa", x["species"]),
strain_genotype2=lambda x: x["strain_genotype"].fillna("WT"),
full_strain_name=lambda x: x["species"].str.cat(x["strain_name"].fillna(""), sep=" ").str.rstrip(),
full_strain_name_with_genotype=lambda x: x["full_strain_name"].str.cat(x["strain_genotype"].fillna(""), sep=" ").str.rstrip(),
strain_genotype=lambda x: x["strain_genotype2"],
).drop(columns="strain_genotype2").to_csv(sys.stdout, index=False, sep=",")'
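The `.assign(...)` step above backfills missing species from a `PAO1-` strain-name prefix, defaults missing genotypes to `WT`, and builds combined strain-name fields. A row-wise stdlib sketch of the same logic, assuming the same column names (the pipeline itself does this vectorized in pandas; note that `full_strain_name_with_genotype` uses the original genotype, since the `WT` default is only assigned afterwards):

```python
def fix_record(rec: dict) -> dict:
    """Mirror the pandas .assign step for a single row: infer missing
    species from a 'PAO1-' strain-name prefix, default genotype to 'WT',
    and build the combined strain-name fields."""
    species = rec.get("species")
    if species is None and (rec.get("strain_name") or "").startswith("PAO1-"):
        species = "Pseudomonas aeruginosa"
    full = f"{species} {rec.get('strain_name') or ''}".rstrip()
    # Uses the *original* genotype (empty if missing), matching the
    # order of assignments in the pandas expression.
    full_geno = f"{full} {rec.get('strain_genotype') or ''}".rstrip()
    return {
        **rec,
        "species": species,
        "strain_genotype": rec.get("strain_genotype") or "WT",
        "full_strain_name": full,
        "full_strain_name_with_genotype": full_geno,
    }

rec = {"species": None, "strain_name": "PAO1-dTomato", "strain_genotype": None}
out = fix_record(rec)
print(out["species"], "/", out["strain_genotype"])
# prints: Pseudomonas aeruginosa / WT
```

The strain name `PAO1-dTomato` above is a made-up example, not a value from the dataset.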
+ schemist split -f CSV --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv
+ local 'cmd=.query("not smiles.isna() and not species.isna()")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("not smiles.isna() and not species.isna()").to_csv(sys.stdout, index=False, sep=",")'
🚀 Converting between string representations with the following parameters:
subcommand: convert
output: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
format: CSV
input: <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>
representation: SMILES
column: spark_SMILES
prefix: None
to: ['id', 'smiles', 'inchikey', 'scaffold', 'mwt', 'clogp', 'tpsa']
options: ['prefix=SCB-']
func: <function _convert at 0x7f451acf5fc0>
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv' mode='w' encoding='UTF-8'>
format: CSV
input: <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7fb21ce42290>
130it [02:50, 1.31s/it]
Error counts:
id: 0
smiles: 0
inchikey: 0
scaffold: 4
mwt: 0
clogp: 0
tpsa: 0
⏰ Completed process in 0:02:52.094082
130it [01:22, 1.58it/s]
130it [01:03, 2.05it/s]
Split counts:
train: 90424
test: 19377
validation: 19375
⏰ Completed process in 0:05:21.385965
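As a sanity check, the reported split counts are consistent with the requested `--train 0.7 --test 0.15` (the remainder going to validation). Scaffold splitting assigns whole scaffolds to a single split, so the achieved fractions only approximate the targets:

```python
# Split counts reported in the log above.
counts = {"train": 90424, "test": 19377, "validation": 19375}
total = sum(counts.values())
fractions = {name: n / total for name, n in counts.items()}

# Targets from the schemist invocation: --train 0.7 --test 0.15,
# with validation taking the remaining 0.15.
for name, target in [("train", 0.70), ("test", 0.15), ("validation", 0.15)]:
    assert abs(fractions[name] - target) < 0.01, (name, fractions[name])

print({name: round(f, 4) for name, f in fractions.items()})
# prints: {'train': 0.7, 'test': 0.15, 'validation': 0.15}
```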
+ for split in "train" "test" "validation"
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ logger 'Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv train...'
+ local 'message=Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv train...'
++ date
+ local '_date=Wed 22 Oct 16:39:06 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:06 BST 2025'
+ echo 'Wed 22 Oct 16:39:06 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv train...'
Wed 22 Oct 16:39:06 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv train...
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ logger 'Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv test...'
+ local 'message=Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv test...'
++ date
+ local '_date=Wed 22 Oct 16:39:10 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:10 BST 2025'
+ echo 'Wed 22 Oct 16:39:10 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv test...'
Wed 22 Oct 16:39:10 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv test...
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all
+ logger 'Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv validation...'
+ local 'message=Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv validation...'
++ date
+ local '_date=Wed 22 Oct 16:39:12 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:12 BST 2025'
+ echo 'Wed 22 Oct 16:39:12 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv validation...'
Wed 22 Oct 16:39:12 BST 2025 :: Processing /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv validation...
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/all/scaffold-split-validation.csv
+ unique_values species ,
+ local col=species
+ tail -n+2
+ local sep=,
+ pandas '[["species"]].drop_duplicates().sort_values("species")' ,
+ local 'cmd=[["species"]].drop_duplicates().sort_values("species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False)[["species"]].drop_duplicates().sort_values("species").to_csv(sys.stdout, index=False, sep=",")'
+ readarray -t unique_organisms
+ printf 'species_name\tn_rows\n'
+ echo 'Acinetobacter anitratus' 'Acinetobacter baumannii' 'Acinetobacter calcoaceticus' 'Acinetobacter junii' 'Alcaligenes xylosus' 'Bacillus amyloliquefaciens' 'Bacillus anthracis' 'Bacillus cereus' 'Bacillus licheniformis' 'Bacillus megaterium' 'Bacillus subtilis' 'Bacteroides fragilis' 'Brucella abortus' 'Burkholderia cepacia' 'Burkholderia thailandensis' 'Caulobacter crescentus' 'Citrobacter freundii' 'Edwardsiella tarda' 'Enterobacter aerogenes' 'Enterobacter cloacae' 'Enterococcus faecalis' 'Enterococcus faecium' 'Enterococcus hirae' 'Escherichia coli' 'Francisella novicida' 'Francisella tularensis' 'Haemophilus influenzae' 'Klebsiella aerogenes' 'Klebsiella oxytoca' 'Klebsiella pneumoniae' 'Kocuria rhizophila' 'Micrococcus luteus' 'Moraxella catarrhalis' 'Morganella morganii' 'Mycobacterium tuberculosis' 'Mycobacterium vaccae' 'Neisseria gonorrhoeae' 'Neisseria meningitidis' 'Proteus hauseri' 'Proteus mirabilis' 'Providencia stuartii' 'Pseudomonas aeruginosa' 'Pseudomonas fluorescens' 'Pseudomonas pseudoalcaligenes' 'Pseudomonas syringae' 'Salmonella enterica serovar Typhimurium' 'Salmonella enterica subsp. enterica' 'Salmonella typhimurium' 'Serratia marcescens' 'Shigella boydii' 'Staphylococcus aureus' 'Staphylococcus capitis' 'Staphylococcus epidermidis' 'Staphylococcus heamolyticus' 'Stenotrophomonas maltophilia' 'Streptococcus agalactiae' 'Streptococcus bovis' 'Streptococcus oralis' 'Streptococcus pneumoniae' 'Streptococcus pyogenes' 'Vibrio cholerae' 'Yersinia enterocolitica' 'Yersinia pestis' 'Yersinia pseudotuberculosis'
Acinetobacter anitratus Acinetobacter baumannii Acinetobacter calcoaceticus Acinetobacter junii Alcaligenes xylosus Bacillus amyloliquefaciens Bacillus anthracis Bacillus cereus Bacillus licheniformis Bacillus megaterium Bacillus subtilis Bacteroides fragilis Brucella abortus Burkholderia cepacia Burkholderia thailandensis Caulobacter crescentus Citrobacter freundii Edwardsiella tarda Enterobacter aerogenes Enterobacter cloacae Enterococcus faecalis Enterococcus faecium Enterococcus hirae Escherichia coli Francisella novicida Francisella tularensis Haemophilus influenzae Klebsiella aerogenes Klebsiella oxytoca Klebsiella pneumoniae Kocuria rhizophila Micrococcus luteus Moraxella catarrhalis Morganella morganii Mycobacterium tuberculosis Mycobacterium vaccae Neisseria gonorrhoeae Neisseria meningitidis Proteus hauseri Proteus mirabilis Providencia stuartii Pseudomonas aeruginosa Pseudomonas fluorescens Pseudomonas pseudoalcaligenes Pseudomonas syringae Salmonella enterica serovar Typhimurium Salmonella enterica subsp. enterica Salmonella typhimurium Serratia marcescens Shigella boydii Staphylococcus aureus Staphylococcus capitis Staphylococcus epidermidis Staphylococcus heamolyticus Stenotrophomonas maltophilia Streptococcus agalactiae Streptococcus bovis Streptococcus oralis Streptococcus pneumoniae Streptococcus pyogenes Vibrio cholerae Yersinia enterocolitica Yersinia pestis Yersinia pseudotuberculosis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Acinetobacter anitratus' ']'
+ species_safe=Acinetobacter-anitratus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-anitratus
+ logger 'Processing Acinetobacter anitratus...'
+ local 'message=Processing Acinetobacter anitratus...'
++ date
+ local '_date=Wed 22 Oct 16:39:16 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:16 BST 2025'
+ echo 'Wed 22 Oct 16:39:16 BST 2025 :: Processing Acinetobacter anitratus...'
Wed 22 Oct 16:39:16 BST 2025 :: Processing Acinetobacter anitratus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-anitratus
+ pandas '; species = "Acinetobacter anitratus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Acinetobacter anitratus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Acinetobacter anitratus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-anitratus/full.csv
++ wc -l
+ data_size=4
+ logger 'Data for Acinetobacter anitratus has 4 rows'
+ local 'message=Data for Acinetobacter anitratus has 4 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:17 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:17 BST 2025'
+ echo 'Wed 22 Oct 16:39:17 BST 2025 :: Data for Acinetobacter anitratus has 4 rows'
Wed 22 Oct 16:39:17 BST 2025 :: Data for Acinetobacter anitratus has 4 rows
+ '[' 4 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-anitratus
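The loop iterations that follow repeat this keep-or-delete pattern for every organism: write the filtered rows, count them, and remove the directory again unless more than 1000 data rows survive (the `-gt 1000` test above). A stdlib sketch of the thresholding logic, with illustrative row counts:

```python
from collections import Counter

MIN_ROWS = 1000  # threshold from the `[ "$data_size" -gt 1000 ]` test in the loop

def species_over_threshold(rows: list[dict], min_rows: int = MIN_ROWS) -> set[str]:
    """Return the species with strictly more than min_rows rows,
    mirroring the keep-or-delete decision of the shell loop."""
    counts = Counter(r["species"] for r in rows)
    return {sp for sp, n in counts.items() if n > min_rows}

# Toy data: one species clears the bar, one does not (counts are made up).
rows = ([{"species": "Acinetobacter baumannii"}] * 1500
        + [{"species": "Acinetobacter anitratus"}] * 4)
print(species_over_threshold(rows))  # prints: {'Acinetobacter baumannii'}
```

Species below the threshold get no per-species split, which is why the 4-row *A. anitratus* directory is deleted here while the 5247-row *A. baumannii* directory below proceeds to `schemist split`.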
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Acinetobacter baumannii' ']'
+ species_safe=Acinetobacter-baumannii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii
+ logger 'Processing Acinetobacter baumannii...'
+ local 'message=Processing Acinetobacter baumannii...'
++ date
+ local '_date=Wed 22 Oct 16:39:17 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:17 BST 2025'
+ echo 'Wed 22 Oct 16:39:17 BST 2025 :: Processing Acinetobacter baumannii...'
Wed 22 Oct 16:39:17 BST 2025 :: Processing Acinetobacter baumannii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii
+ pandas '; species = "Acinetobacter baumannii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Acinetobacter baumannii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Acinetobacter baumannii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/full.csv
++ wc -l
+ data_size=5247
+ logger 'Data for Acinetobacter baumannii has 5247 rows'
+ local 'message=Data for Acinetobacter baumannii has 5247 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:18 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:18 BST 2025'
+ echo 'Wed 22 Oct 16:39:18 BST 2025 :: Data for Acinetobacter baumannii has 5247 rows'
Wed 22 Oct 16:39:18 BST 2025 :: Data for Acinetobacter baumannii has 5247 rows
+ '[' 5247 -gt 1000 ']'
+ printf 'Acinetobacter baumannii\t5247\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7fb39be3a170>
6it [00:03, 1.50it/s]
6it [00:00, 25.71it/s]
Split counts:
train: 3673
test: 788
validation: 786
⏰ Completed process in 0:00:04.253432
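The `schemist split --type scaffold` step above assigns compounds to train/test/validation by scaffold, so that close structural analogues do not straddle splits. schemist's internals are not shown in this log; the sketch below illustrates only the general grouping idea, using a precomputed index-to-scaffold mapping (in the real pipeline the key would be a Bemis-Murcko scaffold derived from the `smiles` column) and a seeded shuffle analogous to `--seed 0`. The greedy fill is an assumption of this sketch, not schemist's documented algorithm.

```python
import random
from collections import defaultdict


def scaffold_split(scaffolds, train_frac=0.7, test_frac=0.15, seed=0):
    """Assign whole scaffold groups to train/test/validation splits.

    `scaffolds` maps a record index to its (precomputed) scaffold key.
    Keeping each group intact prevents near-duplicate structures from
    leaking between train and the held-out sets.
    """
    groups = defaultdict(list)
    for idx, scaf in scaffolds.items():
        groups[scaf].append(idx)
    order = sorted(groups)               # deterministic base order
    random.Random(seed).shuffle(order)   # seeded, like `--seed 0`

    n = len(scaffolds)
    splits = {"train": [], "test": [], "validation": []}
    for scaf in order:
        members = groups[scaf]
        # Greedily fill train up to its quota, then test, then validation.
        if len(splits["train"]) + len(members) <= train_frac * n or not splits["train"]:
            splits["train"].extend(members)
        elif len(splits["test"]) + len(members) <= test_frac * n or not splits["test"]:
            splits["test"].extend(members)
        else:
            splits["validation"].extend(members)
    return splits
```

Because whole groups are placed at once, the realised fractions only approximate 0.7/0.15/0.15, which is consistent with the slightly uneven counts reported in the log (e.g. 3673/788/786).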
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-baumannii/full.csv
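The extraction stage traced above filters `scaffold-split.csv` on the boolean columns `is_train`, `is_test`, and `is_validation` and writes each subset as a maximally compressed gzip file. A standard-library sketch of the same stage (the column and file names are taken from the trace; `write_split_files` is a name chosen here). Note that pandas reads the flags as booleans, whereas on disk they appear as the strings `"True"`/`"False"`, which is what this sketch compares against:

```python
import csv
import gzip
import io
from pathlib import Path


def write_split_files(split_csv: str, out_dir: str) -> None:
    """For each split flag, write a gzipped CSV of the matching rows.

    Mirrors the trace's `.query("is_train")` filter followed by
    `gzip --best` (compresslevel=9).
    """
    rows = list(csv.DictReader(io.StringIO(split_csv)))
    fields = list(rows[0].keys())
    for split in ("train", "test", "validation"):
        flag = f"is_{split}"
        path = Path(out_dir) / f"scaffold-split-{split}.csv.gz"
        with gzip.open(path, "wt", compresslevel=9, newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=fields)
            writer.writeheader()
            writer.writerows(r for r in rows if r[flag] == "True")
```

After the three subset files are written, the full split table is itself gzipped and the intermediate `full.csv` deleted, as the trace shows.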
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Acinetobacter calcoaceticus' ']'
+ species_safe=Acinetobacter-calcoaceticus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-calcoaceticus
+ logger 'Processing Acinetobacter calcoaceticus...'
+ local 'message=Processing Acinetobacter calcoaceticus...'
++ date
+ local '_date=Wed 22 Oct 16:39:26 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:26 BST 2025'
+ echo 'Wed 22 Oct 16:39:26 BST 2025 :: Processing Acinetobacter calcoaceticus...'
Wed 22 Oct 16:39:26 BST 2025 :: Processing Acinetobacter calcoaceticus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-calcoaceticus
+ pandas '; species = "Acinetobacter calcoaceticus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Acinetobacter calcoaceticus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Acinetobacter calcoaceticus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-calcoaceticus/full.csv
++ wc -l
+ data_size=4
+ logger 'Data for Acinetobacter calcoaceticus has 4 rows'
+ local 'message=Data for Acinetobacter calcoaceticus has 4 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:27 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:27 BST 2025'
+ echo 'Wed 22 Oct 16:39:27 BST 2025 :: Data for Acinetobacter calcoaceticus has 4 rows'
Wed 22 Oct 16:39:27 BST 2025 :: Data for Acinetobacter calcoaceticus has 4 rows
+ '[' 4 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-calcoaceticus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Acinetobacter junii' ']'
+ species_safe=Acinetobacter-junii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-junii
+ logger 'Processing Acinetobacter junii...'
+ local 'message=Processing Acinetobacter junii...'
++ date
+ local '_date=Wed 22 Oct 16:39:27 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:27 BST 2025'
+ echo 'Wed 22 Oct 16:39:27 BST 2025 :: Processing Acinetobacter junii...'
Wed 22 Oct 16:39:27 BST 2025 :: Processing Acinetobacter junii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-junii
+ pandas '; species = "Acinetobacter junii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Acinetobacter junii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Acinetobacter junii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-junii/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Acinetobacter junii has 16 rows'
+ local 'message=Data for Acinetobacter junii has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:28 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:28 BST 2025'
+ echo 'Wed 22 Oct 16:39:28 BST 2025 :: Data for Acinetobacter junii has 16 rows'
Wed 22 Oct 16:39:28 BST 2025 :: Data for Acinetobacter junii has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Acinetobacter-junii
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Alcaligenes xylosus' ']'
+ species_safe=Alcaligenes-xylosus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Alcaligenes-xylosus
+ logger 'Processing Alcaligenes xylosus...'
+ local 'message=Processing Alcaligenes xylosus...'
++ date
+ local '_date=Wed 22 Oct 16:39:28 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:28 BST 2025'
+ echo 'Wed 22 Oct 16:39:28 BST 2025 :: Processing Alcaligenes xylosus...'
Wed 22 Oct 16:39:28 BST 2025 :: Processing Alcaligenes xylosus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Alcaligenes-xylosus
+ pandas '; species = "Alcaligenes xylosus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Alcaligenes xylosus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Alcaligenes xylosus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Alcaligenes-xylosus/full.csv
++ wc -l
+ data_size=2
+ logger 'Data for Alcaligenes xylosus has 2 rows'
+ local 'message=Data for Alcaligenes xylosus has 2 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:29 BST 2025'
+ echo 'Wed 22 Oct 16:39:29 BST 2025 :: Data for Alcaligenes xylosus has 2 rows'
Wed 22 Oct 16:39:29 BST 2025 :: Data for Alcaligenes xylosus has 2 rows
+ '[' 2 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Alcaligenes-xylosus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus amyloliquefaciens' ']'
+ species_safe=Bacillus-amyloliquefaciens
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-amyloliquefaciens
+ logger 'Processing Bacillus amyloliquefaciens...'
+ local 'message=Processing Bacillus amyloliquefaciens...'
++ date
+ local '_date=Wed 22 Oct 16:39:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:29 BST 2025'
+ echo 'Wed 22 Oct 16:39:29 BST 2025 :: Processing Bacillus amyloliquefaciens...'
Wed 22 Oct 16:39:29 BST 2025 :: Processing Bacillus amyloliquefaciens...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-amyloliquefaciens
+ pandas '; species = "Bacillus amyloliquefaciens"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus amyloliquefaciens"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus amyloliquefaciens"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-amyloliquefaciens/full.csv
++ wc -l
+ data_size=21
+ logger 'Data for Bacillus amyloliquefaciens has 21 rows'
+ local 'message=Data for Bacillus amyloliquefaciens has 21 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:31 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:31 BST 2025'
+ echo 'Wed 22 Oct 16:39:31 BST 2025 :: Data for Bacillus amyloliquefaciens has 21 rows'
Wed 22 Oct 16:39:31 BST 2025 :: Data for Bacillus amyloliquefaciens has 21 rows
+ '[' 21 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-amyloliquefaciens
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus anthracis' ']'
+ species_safe=Bacillus-anthracis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis
+ logger 'Processing Bacillus anthracis...'
+ local 'message=Processing Bacillus anthracis...'
++ date
+ local '_date=Wed 22 Oct 16:39:31 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:31 BST 2025'
+ echo 'Wed 22 Oct 16:39:31 BST 2025 :: Processing Bacillus anthracis...'
Wed 22 Oct 16:39:31 BST 2025 :: Processing Bacillus anthracis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis
+ pandas '; species = "Bacillus anthracis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus anthracis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus anthracis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/full.csv
++ wc -l
+ data_size=9940
+ logger 'Data for Bacillus anthracis has 9940 rows'
+ local 'message=Data for Bacillus anthracis has 9940 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:32 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:32 BST 2025'
+ echo 'Wed 22 Oct 16:39:32 BST 2025 :: Data for Bacillus anthracis has 9940 rows'
Wed 22 Oct 16:39:32 BST 2025 :: Data for Bacillus anthracis has 9940 rows
+ '[' 9940 -gt 1000 ']'
+ printf 'Bacillus anthracis\t9940\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f0dc935e170>
10it [00:05, 1.82it/s]
10it [00:00, 16.66it/s]
Split counts:
train: 6958
test: 1491
validation: 1491
⏰ Completed process in 0:00:06.129278
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-anthracis/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus cereus' ']'
+ species_safe=Bacillus-cereus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-cereus
+ logger 'Processing Bacillus cereus...'
+ local 'message=Processing Bacillus cereus...'
++ date
+ local '_date=Wed 22 Oct 16:39:42 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:42 BST 2025'
+ echo 'Wed 22 Oct 16:39:42 BST 2025 :: Processing Bacillus cereus...'
Wed 22 Oct 16:39:42 BST 2025 :: Processing Bacillus cereus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-cereus
+ pandas '; species = "Bacillus cereus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus cereus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus cereus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-cereus/full.csv
++ wc -l
+ data_size=38
+ logger 'Data for Bacillus cereus has 38 rows'
+ local 'message=Data for Bacillus cereus has 38 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:43 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:43 BST 2025'
+ echo 'Wed 22 Oct 16:39:43 BST 2025 :: Data for Bacillus cereus has 38 rows'
Wed 22 Oct 16:39:43 BST 2025 :: Data for Bacillus cereus has 38 rows
+ '[' 38 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-cereus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus licheniformis' ']'
+ species_safe=Bacillus-licheniformis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-licheniformis
+ logger 'Processing Bacillus licheniformis...'
+ local 'message=Processing Bacillus licheniformis...'
++ date
+ local '_date=Wed 22 Oct 16:39:43 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:43 BST 2025'
+ echo 'Wed 22 Oct 16:39:43 BST 2025 :: Processing Bacillus licheniformis...'
Wed 22 Oct 16:39:43 BST 2025 :: Processing Bacillus licheniformis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-licheniformis
+ pandas '; species = "Bacillus licheniformis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus licheniformis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus licheniformis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-licheniformis/full.csv
++ wc -l
+ data_size=9
+ logger 'Data for Bacillus licheniformis has 9 rows'
+ local 'message=Data for Bacillus licheniformis has 9 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:44 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:44 BST 2025'
+ echo 'Wed 22 Oct 16:39:44 BST 2025 :: Data for Bacillus licheniformis has 9 rows'
Wed 22 Oct 16:39:44 BST 2025 :: Data for Bacillus licheniformis has 9 rows
+ '[' 9 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-licheniformis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus megaterium' ']'
+ species_safe=Bacillus-megaterium
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-megaterium
+ logger 'Processing Bacillus megaterium...'
+ local 'message=Processing Bacillus megaterium...'
++ date
+ local '_date=Wed 22 Oct 16:39:44 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:44 BST 2025'
+ echo 'Wed 22 Oct 16:39:44 BST 2025 :: Processing Bacillus megaterium...'
Wed 22 Oct 16:39:44 BST 2025 :: Processing Bacillus megaterium...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-megaterium
+ pandas '; species = "Bacillus megaterium"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus megaterium"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus megaterium"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-megaterium/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Bacillus megaterium has 16 rows'
+ local 'message=Data for Bacillus megaterium has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:46 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:46 BST 2025'
+ echo 'Wed 22 Oct 16:39:46 BST 2025 :: Data for Bacillus megaterium has 16 rows'
Wed 22 Oct 16:39:46 BST 2025 :: Data for Bacillus megaterium has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-megaterium
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacillus subtilis' ']'
+ species_safe=Bacillus-subtilis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-subtilis
+ logger 'Processing Bacillus subtilis...'
+ local 'message=Processing Bacillus subtilis...'
++ date
+ local '_date=Wed 22 Oct 16:39:46 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:46 BST 2025'
+ echo 'Wed 22 Oct 16:39:46 BST 2025 :: Processing Bacillus subtilis...'
Wed 22 Oct 16:39:46 BST 2025 :: Processing Bacillus subtilis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-subtilis
+ pandas '; species = "Bacillus subtilis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacillus subtilis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacillus subtilis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-subtilis/full.csv
++ wc -l
+ data_size=315
+ logger 'Data for Bacillus subtilis has 315 rows'
+ local 'message=Data for Bacillus subtilis has 315 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:47 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:47 BST 2025'
+ echo 'Wed 22 Oct 16:39:47 BST 2025 :: Data for Bacillus subtilis has 315 rows'
Wed 22 Oct 16:39:47 BST 2025 :: Data for Bacillus subtilis has 315 rows
+ '[' 315 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacillus-subtilis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Bacteroides fragilis' ']'
+ species_safe=Bacteroides-fragilis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacteroides-fragilis
+ logger 'Processing Bacteroides fragilis...'
+ local 'message=Processing Bacteroides fragilis...'
++ date
+ local '_date=Wed 22 Oct 16:39:47 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:47 BST 2025'
+ echo 'Wed 22 Oct 16:39:47 BST 2025 :: Processing Bacteroides fragilis...'
Wed 22 Oct 16:39:47 BST 2025 :: Processing Bacteroides fragilis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacteroides-fragilis
+ pandas '; species = "Bacteroides fragilis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Bacteroides fragilis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Bacteroides fragilis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacteroides-fragilis/full.csv
++ wc -l
+ data_size=1
+ logger 'Data for Bacteroides fragilis has 1 rows'
+ local 'message=Data for Bacteroides fragilis has 1 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:49 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:49 BST 2025'
+ echo 'Wed 22 Oct 16:39:49 BST 2025 :: Data for Bacteroides fragilis has 1 rows'
Wed 22 Oct 16:39:49 BST 2025 :: Data for Bacteroides fragilis has 1 rows
+ '[' 1 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Bacteroides-fragilis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Brucella abortus' ']'
+ species_safe=Brucella-abortus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus
+ logger 'Processing Brucella abortus...'
+ local 'message=Processing Brucella abortus...'
++ date
+ local '_date=Wed 22 Oct 16:39:49 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:49 BST 2025'
+ echo 'Wed 22 Oct 16:39:49 BST 2025 :: Processing Brucella abortus...'
Wed 22 Oct 16:39:49 BST 2025 :: Processing Brucella abortus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus
+ pandas '; species = "Brucella abortus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Brucella abortus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Brucella abortus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/full.csv
++ wc -l
+ data_size=9947
+ logger 'Data for Brucella abortus has 9947 rows'
+ local 'message=Data for Brucella abortus has 9947 rows'
++ date
+ local '_date=Wed 22 Oct 16:39:50 BST 2025'
+ local 'prefix=Wed 22 Oct 16:39:50 BST 2025'
+ echo 'Wed 22 Oct 16:39:50 BST 2025 :: Data for Brucella abortus has 9947 rows'
Wed 22 Oct 16:39:50 BST 2025 :: Data for Brucella abortus has 9947 rows
+ '[' 9947 -gt 1000 ']'
+ printf 'Brucella abortus\t9947\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f478e32e170>
10it [00:05, 1.84it/s]
10it [00:00, 16.50it/s]
Split counts:
train: 6963
test: 1493
validation: 1491
⏰ Completed process in 0:00:06.121434
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Brucella-abortus/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Burkholderia cepacia' ']'
+ species_safe=Burkholderia-cepacia
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-cepacia
+ logger 'Processing Burkholderia cepacia...'
+ local 'message=Processing Burkholderia cepacia...'
++ date
+ local '_date=Wed 22 Oct 16:40:02 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:02 BST 2025'
+ echo 'Wed 22 Oct 16:40:02 BST 2025 :: Processing Burkholderia cepacia...'
Wed 22 Oct 16:40:02 BST 2025 :: Processing Burkholderia cepacia...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-cepacia
+ pandas '; species = "Burkholderia cepacia"; df.query("species == @species")' ,
+ local 'cmd=; species = "Burkholderia cepacia"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Burkholderia cepacia"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-cepacia/full.csv
++ wc -l
+ data_size=9
+ logger 'Data for Burkholderia cepacia has 9 rows'
+ local 'message=Data for Burkholderia cepacia has 9 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:03 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:03 BST 2025'
+ echo 'Wed 22 Oct 16:40:03 BST 2025 :: Data for Burkholderia cepacia has 9 rows'
Wed 22 Oct 16:40:03 BST 2025 :: Data for Burkholderia cepacia has 9 rows
+ '[' 9 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-cepacia
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Burkholderia thailandensis' ']'
+ species_safe=Burkholderia-thailandensis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-thailandensis
+ logger 'Processing Burkholderia thailandensis...'
+ local 'message=Processing Burkholderia thailandensis...'
++ date
+ local '_date=Wed 22 Oct 16:40:03 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:03 BST 2025'
+ echo 'Wed 22 Oct 16:40:03 BST 2025 :: Processing Burkholderia thailandensis...'
Wed 22 Oct 16:40:03 BST 2025 :: Processing Burkholderia thailandensis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-thailandensis
+ pandas '; species = "Burkholderia thailandensis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Burkholderia thailandensis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Burkholderia thailandensis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-thailandensis/full.csv
++ wc -l
+ data_size=725
+ logger 'Data for Burkholderia thailandensis has 725 rows'
+ local 'message=Data for Burkholderia thailandensis has 725 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:05 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:05 BST 2025'
+ echo 'Wed 22 Oct 16:40:05 BST 2025 :: Data for Burkholderia thailandensis has 725 rows'
Wed 22 Oct 16:40:05 BST 2025 :: Data for Burkholderia thailandensis has 725 rows
+ '[' 725 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Burkholderia-thailandensis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Caulobacter crescentus' ']'
+ species_safe=Caulobacter-crescentus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Caulobacter-crescentus
+ logger 'Processing Caulobacter crescentus...'
+ local 'message=Processing Caulobacter crescentus...'
++ date
+ local '_date=Wed 22 Oct 16:40:05 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:05 BST 2025'
+ echo 'Wed 22 Oct 16:40:05 BST 2025 :: Processing Caulobacter crescentus...'
Wed 22 Oct 16:40:05 BST 2025 :: Processing Caulobacter crescentus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Caulobacter-crescentus
+ pandas '; species = "Caulobacter crescentus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Caulobacter crescentus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Caulobacter crescentus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Caulobacter-crescentus/full.csv
++ wc -l
+ data_size=37
+ logger 'Data for Caulobacter crescentus has 37 rows'
+ local 'message=Data for Caulobacter crescentus has 37 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:06 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:06 BST 2025'
+ echo 'Wed 22 Oct 16:40:06 BST 2025 :: Data for Caulobacter crescentus has 37 rows'
Wed 22 Oct 16:40:06 BST 2025 :: Data for Caulobacter crescentus has 37 rows
+ '[' 37 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Caulobacter-crescentus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Citrobacter freundii' ']'
+ species_safe=Citrobacter-freundii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Citrobacter-freundii
+ logger 'Processing Citrobacter freundii...'
+ local 'message=Processing Citrobacter freundii...'
++ date
+ local '_date=Wed 22 Oct 16:40:06 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:06 BST 2025'
+ echo 'Wed 22 Oct 16:40:06 BST 2025 :: Processing Citrobacter freundii...'
Wed 22 Oct 16:40:06 BST 2025 :: Processing Citrobacter freundii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Citrobacter-freundii
+ pandas '; species = "Citrobacter freundii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Citrobacter freundii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Citrobacter freundii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Citrobacter-freundii/full.csv
++ wc -l
+ data_size=15
+ logger 'Data for Citrobacter freundii has 15 rows'
+ local 'message=Data for Citrobacter freundii has 15 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:07 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:07 BST 2025'
+ echo 'Wed 22 Oct 16:40:07 BST 2025 :: Data for Citrobacter freundii has 15 rows'
Wed 22 Oct 16:40:07 BST 2025 :: Data for Citrobacter freundii has 15 rows
+ '[' 15 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Citrobacter-freundii
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Edwardsiella tarda' ']'
+ species_safe=Edwardsiella-tarda
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Edwardsiella-tarda
+ logger 'Processing Edwardsiella tarda...'
+ local 'message=Processing Edwardsiella tarda...'
++ date
+ local '_date=Wed 22 Oct 16:40:07 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:07 BST 2025'
+ echo 'Wed 22 Oct 16:40:07 BST 2025 :: Processing Edwardsiella tarda...'
Wed 22 Oct 16:40:07 BST 2025 :: Processing Edwardsiella tarda...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Edwardsiella-tarda
+ pandas '; species = "Edwardsiella tarda"; df.query("species == @species")' ,
+ local 'cmd=; species = "Edwardsiella tarda"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Edwardsiella tarda"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Edwardsiella-tarda/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Edwardsiella tarda has 16 rows'
+ local 'message=Data for Edwardsiella tarda has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:08 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:08 BST 2025'
+ echo 'Wed 22 Oct 16:40:08 BST 2025 :: Data for Edwardsiella tarda has 16 rows'
Wed 22 Oct 16:40:08 BST 2025 :: Data for Edwardsiella tarda has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Edwardsiella-tarda
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Enterobacter aerogenes' ']'
+ species_safe=Enterobacter-aerogenes
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-aerogenes
+ logger 'Processing Enterobacter aerogenes...'
+ local 'message=Processing Enterobacter aerogenes...'
++ date
+ local '_date=Wed 22 Oct 16:40:08 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:08 BST 2025'
+ echo 'Wed 22 Oct 16:40:08 BST 2025 :: Processing Enterobacter aerogenes...'
Wed 22 Oct 16:40:08 BST 2025 :: Processing Enterobacter aerogenes...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-aerogenes
+ pandas '; species = "Enterobacter aerogenes"; df.query("species == @species")' ,
+ local 'cmd=; species = "Enterobacter aerogenes"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Enterobacter aerogenes"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-aerogenes/full.csv
++ wc -l
+ data_size=24
+ logger 'Data for Enterobacter aerogenes has 24 rows'
+ local 'message=Data for Enterobacter aerogenes has 24 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:10 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:10 BST 2025'
+ echo 'Wed 22 Oct 16:40:10 BST 2025 :: Data for Enterobacter aerogenes has 24 rows'
Wed 22 Oct 16:40:10 BST 2025 :: Data for Enterobacter aerogenes has 24 rows
+ '[' 24 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-aerogenes
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Enterobacter cloacae' ']'
+ species_safe=Enterobacter-cloacae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-cloacae
+ logger 'Processing Enterobacter cloacae...'
+ local 'message=Processing Enterobacter cloacae...'
++ date
+ local '_date=Wed 22 Oct 16:40:10 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:10 BST 2025'
+ echo 'Wed 22 Oct 16:40:10 BST 2025 :: Processing Enterobacter cloacae...'
Wed 22 Oct 16:40:10 BST 2025 :: Processing Enterobacter cloacae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-cloacae
+ pandas '; species = "Enterobacter cloacae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Enterobacter cloacae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Enterobacter cloacae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-cloacae/full.csv
++ wc -l
+ data_size=201
+ logger 'Data for Enterobacter cloacae has 201 rows'
+ local 'message=Data for Enterobacter cloacae has 201 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:11 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:11 BST 2025'
+ echo 'Wed 22 Oct 16:40:11 BST 2025 :: Data for Enterobacter cloacae has 201 rows'
Wed 22 Oct 16:40:11 BST 2025 :: Data for Enterobacter cloacae has 201 rows
+ '[' 201 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterobacter-cloacae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Enterococcus faecalis' ']'
+ species_safe=Enterococcus-faecalis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis
+ logger 'Processing Enterococcus faecalis...'
+ local 'message=Processing Enterococcus faecalis...'
++ date
+ local '_date=Wed 22 Oct 16:40:11 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:11 BST 2025'
+ echo 'Wed 22 Oct 16:40:11 BST 2025 :: Processing Enterococcus faecalis...'
Wed 22 Oct 16:40:11 BST 2025 :: Processing Enterococcus faecalis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis
+ pandas '; species = "Enterococcus faecalis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Enterococcus faecalis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Enterococcus faecalis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/full.csv
++ wc -l
+ data_size=1342
+ logger 'Data for Enterococcus faecalis has 1342 rows'
+ local 'message=Data for Enterococcus faecalis has 1342 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:12 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:12 BST 2025'
+ echo 'Wed 22 Oct 16:40:12 BST 2025 :: Data for Enterococcus faecalis has 1342 rows'
Wed 22 Oct 16:40:12 BST 2025 :: Data for Enterococcus faecalis has 1342 rows
+ '[' 1342 -gt 1000 ']'
+ printf 'Enterococcus faecalis\t1342\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f35dc08e170>
2it [00:01, 1.94it/s]
2it [00:00, 44.51it/s]
Split counts:
train: 940
test: 202
validation: 200
⏰ Completed process in 0:00:01.090359
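For species that clear the threshold, `schemist split --type scaffold` produces a 70/15/15 train/test/validation split in which compounds sharing a scaffold are kept in the same partition. The sketch below shows the general group-then-assign idea only — it is not schemist's implementation, and `scaffold_fn` is a hypothetical placeholder for a real scaffold computation (e.g. a Bemis-Murcko scaffold from RDKit).

```python
import random
from collections import defaultdict


def scaffold_split(records, scaffold_fn, train=0.7, test=0.15, seed=0):
    """Group records by scaffold, then assign whole groups to splits so
    no scaffold straddles train/test/validation (a sketch of the idea,
    not schemist's algorithm)."""
    groups = defaultdict(list)
    for rec in records:
        groups[scaffold_fn(rec)].append(rec)
    buckets = list(groups.values())
    random.Random(seed).shuffle(buckets)  # deterministic for a fixed seed
    n = len(records)
    splits = {"train": [], "test": [], "validation": []}
    for bucket in buckets:
        # Greedily fill train up to its target fraction, then test; the
        # remainder becomes validation.
        if len(splits["train"]) < train * n:
            splits["train"].extend(bucket)
        elif len(splits["test"]) < test * n:
            splits["test"].extend(bucket)
        else:
            splits["validation"].extend(bucket)
    return splits


# Hypothetical records: the letter stands in for a computed scaffold
records = [{"id": i, "scaffold": s} for i, s in enumerate("aaabbcccde")]
splits = scaffold_split(records, lambda r: r["scaffold"])
```

Because whole scaffold groups are assigned at once, the realised split counts only approximate the requested fractions — which is why the trace reports e.g. 940/202/200 rather than exactly 70/15/15.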
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecalis/full.csv
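After the split, the trace extracts each partition by filtering on the boolean columns (`is_train`, `is_test`, `is_validation`) that `schemist split` added, then compresses each file with `gzip --best`. A stdlib stand-in for one such extraction follows; the column names come from the trace, while the miniature `rows` table and output path are illustrative.

```python
import csv
import gzip
import os
import tempfile


def write_split(rows, header, flag, path):
    """Write only the rows where boolean column `flag` is true, gzipped —
    a stdlib stand-in for df.query("is_train") ... piped to gzip --best."""
    with gzip.open(path, "wt", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=header)
        writer.writeheader()
        for row in rows:
            if row[flag] == "True":
                writer.writerow(row)


# Hypothetical miniature split table with the flag columns from the trace
header = ["smiles", "is_train", "is_test", "is_validation"]
rows = [
    {"smiles": "CCO", "is_train": "True", "is_test": "False", "is_validation": "False"},
    {"smiles": "CCN", "is_train": "False", "is_test": "True", "is_validation": "False"},
]
out = os.path.join(tempfile.mkdtemp(), "scaffold-split-train.csv.gz")
write_split(rows, header, "is_train", out)
with gzip.open(out, "rt", newline="") as fh:
    train_rows = list(csv.DictReader(fh))
```

Once all three per-split files and the combined table are written and gzipped, the uncompressed `full.csv` is deleted, as the trace shows.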
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Enterococcus faecium' ']'
+ species_safe=Enterococcus-faecium
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecium
+ logger 'Processing Enterococcus faecium...'
+ local 'message=Processing Enterococcus faecium...'
++ date
+ local '_date=Wed 22 Oct 16:40:19 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:19 BST 2025'
+ echo 'Wed 22 Oct 16:40:19 BST 2025 :: Processing Enterococcus faecium...'
Wed 22 Oct 16:40:19 BST 2025 :: Processing Enterococcus faecium...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecium
+ pandas '; species = "Enterococcus faecium"; df.query("species == @species")' ,
+ local 'cmd=; species = "Enterococcus faecium"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Enterococcus faecium"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecium/full.csv
++ wc -l
+ data_size=276
+ logger 'Data for Enterococcus faecium has 276 rows'
+ local 'message=Data for Enterococcus faecium has 276 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:20 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:20 BST 2025'
+ echo 'Wed 22 Oct 16:40:20 BST 2025 :: Data for Enterococcus faecium has 276 rows'
Wed 22 Oct 16:40:20 BST 2025 :: Data for Enterococcus faecium has 276 rows
+ '[' 276 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-faecium
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Enterococcus hirae' ']'
+ species_safe=Enterococcus-hirae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-hirae
+ logger 'Processing Enterococcus hirae...'
+ local 'message=Processing Enterococcus hirae...'
++ date
+ local '_date=Wed 22 Oct 16:40:20 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:20 BST 2025'
+ echo 'Wed 22 Oct 16:40:20 BST 2025 :: Processing Enterococcus hirae...'
Wed 22 Oct 16:40:20 BST 2025 :: Processing Enterococcus hirae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-hirae
+ pandas '; species = "Enterococcus hirae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Enterococcus hirae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Enterococcus hirae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-hirae/full.csv
++ wc -l
+ data_size=153
+ logger 'Data for Enterococcus hirae has 153 rows'
+ local 'message=Data for Enterococcus hirae has 153 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:22 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:22 BST 2025'
+ echo 'Wed 22 Oct 16:40:22 BST 2025 :: Data for Enterococcus hirae has 153 rows'
Wed 22 Oct 16:40:22 BST 2025 :: Data for Enterococcus hirae has 153 rows
+ '[' 153 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Enterococcus-hirae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Escherichia coli' ']'
+ species_safe=Escherichia-coli
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli
+ logger 'Processing Escherichia coli...'
+ local 'message=Processing Escherichia coli...'
++ date
+ local '_date=Wed 22 Oct 16:40:22 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:22 BST 2025'
+ echo 'Wed 22 Oct 16:40:22 BST 2025 :: Processing Escherichia coli...'
Wed 22 Oct 16:40:22 BST 2025 :: Processing Escherichia coli...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli
+ pandas '; species = "Escherichia coli"; df.query("species == @species")' ,
+ local 'cmd=; species = "Escherichia coli"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Escherichia coli"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/full.csv
++ wc -l
+ data_size=28850
+ logger 'Data for Escherichia coli has 28850 rows'
+ local 'message=Data for Escherichia coli has 28850 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:23 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:23 BST 2025'
+ echo 'Wed 22 Oct 16:40:23 BST 2025 :: Data for Escherichia coli has 28850 rows'
Wed 22 Oct 16:40:23 BST 2025 :: Data for Escherichia coli has 28850 rows
+ '[' 28850 -gt 1000 ']'
+ printf 'Escherichia coli\t28850\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7fd9b6ee6170>
29it [00:22, 1.30it/s]
29it [00:03, 8.19it/s]
Split counts:
train: 20177
test: 4324
validation: 4323
⏰ Completed process in 0:00:26.038792
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Escherichia-coli/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Francisella novicida' ']'
+ species_safe=Francisella-novicida
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-novicida
+ logger 'Processing Francisella novicida...'
+ local 'message=Processing Francisella novicida...'
++ date
+ local '_date=Wed 22 Oct 16:40:56 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:56 BST 2025'
+ echo 'Wed 22 Oct 16:40:56 BST 2025 :: Processing Francisella novicida...'
Wed 22 Oct 16:40:56 BST 2025 :: Processing Francisella novicida...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-novicida
+ pandas '; species = "Francisella novicida"; df.query("species == @species")' ,
+ local 'cmd=; species = "Francisella novicida"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Francisella novicida"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-novicida/full.csv
++ wc -l
+ data_size=4
+ logger 'Data for Francisella novicida has 4 rows'
+ local 'message=Data for Francisella novicida has 4 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:57 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:57 BST 2025'
+ echo 'Wed 22 Oct 16:40:57 BST 2025 :: Data for Francisella novicida has 4 rows'
Wed 22 Oct 16:40:57 BST 2025 :: Data for Francisella novicida has 4 rows
+ '[' 4 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-novicida
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Francisella tularensis' ']'
+ species_safe=Francisella-tularensis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis
+ logger 'Processing Francisella tularensis...'
+ local 'message=Processing Francisella tularensis...'
++ date
+ local '_date=Wed 22 Oct 16:40:57 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:57 BST 2025'
+ echo 'Wed 22 Oct 16:40:57 BST 2025 :: Processing Francisella tularensis...'
Wed 22 Oct 16:40:57 BST 2025 :: Processing Francisella tularensis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis
+ pandas '; species = "Francisella tularensis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Francisella tularensis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Francisella tularensis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/full.csv
++ wc -l
+ data_size=9681
+ logger 'Data for Francisella tularensis has 9681 rows'
+ local 'message=Data for Francisella tularensis has 9681 rows'
++ date
+ local '_date=Wed 22 Oct 16:40:59 BST 2025'
+ local 'prefix=Wed 22 Oct 16:40:59 BST 2025'
+ echo 'Wed 22 Oct 16:40:59 BST 2025 :: Data for Francisella tularensis has 9681 rows'
Wed 22 Oct 16:40:59 BST 2025 :: Data for Francisella tularensis has 9681 rows
+ '[' 9681 -gt 1000 ']'
+ printf 'Francisella tularensis\t9681\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7fb3ce59a170>
10it [00:05, 1.88it/s]
10it [00:00, 17.28it/s]
Split counts:
train: 6777
test: 1453
validation: 1451
⏰ Completed process in 0:00:05.960300
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Francisella-tularensis/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Haemophilus influenzae' ']'
+ species_safe=Haemophilus-influenzae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Haemophilus-influenzae
+ logger 'Processing Haemophilus influenzae...'
+ local 'message=Processing Haemophilus influenzae...'
++ date
+ local '_date=Wed 22 Oct 16:41:08 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:08 BST 2025'
+ echo 'Wed 22 Oct 16:41:08 BST 2025 :: Processing Haemophilus influenzae...'
Wed 22 Oct 16:41:08 BST 2025 :: Processing Haemophilus influenzae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Haemophilus-influenzae
+ pandas '; species = "Haemophilus influenzae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Haemophilus influenzae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Haemophilus influenzae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Haemophilus-influenzae/full.csv
++ wc -l
+ data_size=244
+ logger 'Data for Haemophilus influenzae has 244 rows'
+ local 'message=Data for Haemophilus influenzae has 244 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:09 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:09 BST 2025'
+ echo 'Wed 22 Oct 16:41:09 BST 2025 :: Data for Haemophilus influenzae has 244 rows'
Wed 22 Oct 16:41:09 BST 2025 :: Data for Haemophilus influenzae has 244 rows
+ '[' 244 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Haemophilus-influenzae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Klebsiella aerogenes' ']'
+ species_safe=Klebsiella-aerogenes
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-aerogenes
+ logger 'Processing Klebsiella aerogenes...'
+ local 'message=Processing Klebsiella aerogenes...'
++ date
+ local '_date=Wed 22 Oct 16:41:09 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:09 BST 2025'
+ echo 'Wed 22 Oct 16:41:09 BST 2025 :: Processing Klebsiella aerogenes...'
Wed 22 Oct 16:41:09 BST 2025 :: Processing Klebsiella aerogenes...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-aerogenes
+ pandas '; species = "Klebsiella aerogenes"; df.query("species == @species")' ,
+ local 'cmd=; species = "Klebsiella aerogenes"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Klebsiella aerogenes"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-aerogenes/full.csv
++ wc -l
+ data_size=137
+ logger 'Data for Klebsiella aerogenes has 137 rows'
+ local 'message=Data for Klebsiella aerogenes has 137 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:11 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:11 BST 2025'
+ echo 'Wed 22 Oct 16:41:11 BST 2025 :: Data for Klebsiella aerogenes has 137 rows'
Wed 22 Oct 16:41:11 BST 2025 :: Data for Klebsiella aerogenes has 137 rows
+ '[' 137 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-aerogenes
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Klebsiella oxytoca' ']'
+ species_safe=Klebsiella-oxytoca
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-oxytoca
+ logger 'Processing Klebsiella oxytoca...'
+ local 'message=Processing Klebsiella oxytoca...'
++ date
+ local '_date=Wed 22 Oct 16:41:11 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:11 BST 2025'
+ echo 'Wed 22 Oct 16:41:11 BST 2025 :: Processing Klebsiella oxytoca...'
Wed 22 Oct 16:41:11 BST 2025 :: Processing Klebsiella oxytoca...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-oxytoca
+ pandas '; species = "Klebsiella oxytoca"; df.query("species == @species")' ,
+ local 'cmd=; species = "Klebsiella oxytoca"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Klebsiella oxytoca"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-oxytoca/full.csv
++ wc -l
+ data_size=21
+ logger 'Data for Klebsiella oxytoca has 21 rows'
+ local 'message=Data for Klebsiella oxytoca has 21 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:12 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:12 BST 2025'
+ echo 'Wed 22 Oct 16:41:12 BST 2025 :: Data for Klebsiella oxytoca has 21 rows'
Wed 22 Oct 16:41:12 BST 2025 :: Data for Klebsiella oxytoca has 21 rows
+ '[' 21 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-oxytoca
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Klebsiella pneumoniae' ']'
+ species_safe=Klebsiella-pneumoniae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae
+ logger 'Processing Klebsiella pneumoniae...'
+ local 'message=Processing Klebsiella pneumoniae...'
++ date
+ local '_date=Wed 22 Oct 16:41:12 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:12 BST 2025'
+ echo 'Wed 22 Oct 16:41:12 BST 2025 :: Processing Klebsiella pneumoniae...'
Wed 22 Oct 16:41:12 BST 2025 :: Processing Klebsiella pneumoniae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae
+ pandas '; species = "Klebsiella pneumoniae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Klebsiella pneumoniae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Klebsiella pneumoniae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/full.csv
++ wc -l
+ data_size=6306
+ logger 'Data for Klebsiella pneumoniae has 6306 rows'
+ local 'message=Data for Klebsiella pneumoniae has 6306 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:13 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:13 BST 2025'
+ echo 'Wed 22 Oct 16:41:13 BST 2025 :: Data for Klebsiella pneumoniae has 6306 rows'
Wed 22 Oct 16:41:13 BST 2025 :: Data for Klebsiella pneumoniae has 6306 rows
+ '[' 6306 -gt 1000 ']'
+ printf 'Klebsiella pneumoniae\t6306\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f3474546170>
7it [00:04, 1.56it/s]
7it [00:00, 23.34it/s]
Split counts:
train: 4415
test: 946
validation: 945
⏰ Completed process in 0:00:04.834648
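`schemist split --type scaffold` assigns whole groups of structurally related molecules (presumably sharing a Bemis-Murcko scaffold) to a single fold, so the same scaffold never appears in both train and test. A rough sketch of that group-wise assignment is below; `group_split` and its greedy fill strategy are illustrative assumptions, not schemist's actual algorithm, and the `key` function stands in for real scaffold computation.

```python
import random

def group_split(items, key, train=0.7, test=0.15, seed=0):
    """Assign whole groups (e.g. scaffold classes) to train/test/validation,
    filling train first, then test; the remainder becomes validation."""
    groups = {}
    for item in items:
        groups.setdefault(key(item), []).append(item)
    order = sorted(groups)               # deterministic base order
    random.Random(seed).shuffle(order)   # seeded shuffle, like `--seed 0`
    n = len(items)
    folds = {"train": [], "test": [], "validation": []}
    for g in order:
        if len(folds["train"]) < train * n:
            folds["train"].extend(groups[g])
        elif len(folds["test"]) < test * n:
            folds["test"].extend(groups[g])
        else:
            folds["validation"].extend(groups[g])
    return folds

# 100 items in 10 equal "scaffold" groups; each group lands in one fold only.
folds = group_split(list(range(100)), key=lambda x: x % 10)
```

Because folds are filled group by group, the realised fractions only approximate the 0.7/0.15/0.15 targets, which is consistent with the slightly uneven counts (e.g. 4415/946/945) reported above.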
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Klebsiella-pneumoniae/full.csv
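After a successful split, the trace extracts each fold by querying the boolean columns `is_train`, `is_test`, and `is_validation` that schemist writes into the split CSV, then compresses each fold with `gzip --best`. A self-contained sketch of that step, using an in-memory stand-in for `scaffold-split.csv` (the column names are taken from the `.query(...)` calls in the log):

```python
import gzip
import io

import pandas as pd

# Stand-in for scaffold-split.csv: one row per fold, flagged by the
# boolean membership columns schemist emits.
split_csv = (
    "smiles,is_train,is_test,is_validation\n"
    "CCO,True,False,False\n"
    "CCN,False,True,False\n"
    "CCC,False,False,True\n"
)
df = pd.read_csv(io.StringIO(split_csv))

# Mirror the trace's loop: for split in train/test/validation,
# filter with df.query("is_<split>") and gzip the resulting CSV.
folds = {fold: df.query(f"is_{fold}") for fold in ("train", "test", "validation")}
blobs = {
    fold: gzip.compress(part.to_csv(index=False).encode(), compresslevel=9)
    for fold, part in folds.items()  # compresslevel=9 matches `gzip --best`
}
```

Each `blobs[fold]` corresponds to one `scaffold-split-<fold>.csv.gz` file; the uncompressed `full.csv` is deleted afterwards, as the `rm` above shows.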
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Kocuria rhizophila' ']'
+ species_safe=Kocuria-rhizophila
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Kocuria-rhizophila
+ logger 'Processing Kocuria rhizophila...'
+ local 'message=Processing Kocuria rhizophila...'
++ date
+ local '_date=Wed 22 Oct 16:41:21 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:21 BST 2025'
+ echo 'Wed 22 Oct 16:41:21 BST 2025 :: Processing Kocuria rhizophila...'
Wed 22 Oct 16:41:21 BST 2025 :: Processing Kocuria rhizophila...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Kocuria-rhizophila
+ pandas '; species = "Kocuria rhizophila"; df.query("species == @species")' ,
+ local 'cmd=; species = "Kocuria rhizophila"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Kocuria rhizophila"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Kocuria-rhizophila/full.csv
++ wc -l
+ data_size=4
+ logger 'Data for Kocuria rhizophila has 4 rows'
+ local 'message=Data for Kocuria rhizophila has 4 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:22 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:22 BST 2025'
+ echo 'Wed 22 Oct 16:41:22 BST 2025 :: Data for Kocuria rhizophila has 4 rows'
Wed 22 Oct 16:41:22 BST 2025 :: Data for Kocuria rhizophila has 4 rows
+ '[' 4 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Kocuria-rhizophila
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Micrococcus luteus' ']'
+ species_safe=Micrococcus-luteus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Micrococcus-luteus
+ logger 'Processing Micrococcus luteus...'
+ local 'message=Processing Micrococcus luteus...'
++ date
+ local '_date=Wed 22 Oct 16:41:22 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:22 BST 2025'
+ echo 'Wed 22 Oct 16:41:22 BST 2025 :: Processing Micrococcus luteus...'
Wed 22 Oct 16:41:22 BST 2025 :: Processing Micrococcus luteus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Micrococcus-luteus
+ pandas '; species = "Micrococcus luteus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Micrococcus luteus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Micrococcus luteus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Micrococcus-luteus/full.csv
++ wc -l
+ data_size=15
+ logger 'Data for Micrococcus luteus has 15 rows'
+ local 'message=Data for Micrococcus luteus has 15 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:24 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:24 BST 2025'
+ echo 'Wed 22 Oct 16:41:24 BST 2025 :: Data for Micrococcus luteus has 15 rows'
Wed 22 Oct 16:41:24 BST 2025 :: Data for Micrococcus luteus has 15 rows
+ '[' 15 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Micrococcus-luteus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Moraxella catarrhalis' ']'
+ species_safe=Moraxella-catarrhalis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Moraxella-catarrhalis
+ logger 'Processing Moraxella catarrhalis...'
+ local 'message=Processing Moraxella catarrhalis...'
++ date
+ local '_date=Wed 22 Oct 16:41:24 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:24 BST 2025'
+ echo 'Wed 22 Oct 16:41:24 BST 2025 :: Processing Moraxella catarrhalis...'
Wed 22 Oct 16:41:24 BST 2025 :: Processing Moraxella catarrhalis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Moraxella-catarrhalis
+ pandas '; species = "Moraxella catarrhalis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Moraxella catarrhalis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Moraxella catarrhalis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Moraxella-catarrhalis/full.csv
++ wc -l
+ data_size=36
+ logger 'Data for Moraxella catarrhalis has 36 rows'
+ local 'message=Data for Moraxella catarrhalis has 36 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:25 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:25 BST 2025'
+ echo 'Wed 22 Oct 16:41:25 BST 2025 :: Data for Moraxella catarrhalis has 36 rows'
Wed 22 Oct 16:41:25 BST 2025 :: Data for Moraxella catarrhalis has 36 rows
+ '[' 36 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Moraxella-catarrhalis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Morganella morganii' ']'
+ species_safe=Morganella-morganii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Morganella-morganii
+ logger 'Processing Morganella morganii...'
+ local 'message=Processing Morganella morganii...'
++ date
+ local '_date=Wed 22 Oct 16:41:25 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:25 BST 2025'
+ echo 'Wed 22 Oct 16:41:25 BST 2025 :: Processing Morganella morganii...'
Wed 22 Oct 16:41:25 BST 2025 :: Processing Morganella morganii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Morganella-morganii
+ pandas '; species = "Morganella morganii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Morganella morganii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Morganella morganii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Morganella-morganii/full.csv
++ wc -l
+ data_size=18
+ logger 'Data for Morganella morganii has 18 rows'
+ local 'message=Data for Morganella morganii has 18 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:26 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:26 BST 2025'
+ echo 'Wed 22 Oct 16:41:26 BST 2025 :: Data for Morganella morganii has 18 rows'
Wed 22 Oct 16:41:26 BST 2025 :: Data for Morganella morganii has 18 rows
+ '[' 18 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Morganella-morganii
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Mycobacterium tuberculosis' ']'
+ species_safe=Mycobacterium-tuberculosis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-tuberculosis
+ logger 'Processing Mycobacterium tuberculosis...'
+ local 'message=Processing Mycobacterium tuberculosis...'
++ date
+ local '_date=Wed 22 Oct 16:41:26 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:26 BST 2025'
+ echo 'Wed 22 Oct 16:41:26 BST 2025 :: Processing Mycobacterium tuberculosis...'
Wed 22 Oct 16:41:26 BST 2025 :: Processing Mycobacterium tuberculosis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-tuberculosis
+ pandas '; species = "Mycobacterium tuberculosis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Mycobacterium tuberculosis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Mycobacterium tuberculosis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-tuberculosis/full.csv
++ wc -l
+ data_size=138
+ logger 'Data for Mycobacterium tuberculosis has 138 rows'
+ local 'message=Data for Mycobacterium tuberculosis has 138 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:28 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:28 BST 2025'
+ echo 'Wed 22 Oct 16:41:28 BST 2025 :: Data for Mycobacterium tuberculosis has 138 rows'
Wed 22 Oct 16:41:28 BST 2025 :: Data for Mycobacterium tuberculosis has 138 rows
+ '[' 138 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-tuberculosis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Mycobacterium vaccae' ']'
+ species_safe=Mycobacterium-vaccae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-vaccae
+ logger 'Processing Mycobacterium vaccae...'
+ local 'message=Processing Mycobacterium vaccae...'
++ date
+ local '_date=Wed 22 Oct 16:41:28 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:28 BST 2025'
+ echo 'Wed 22 Oct 16:41:28 BST 2025 :: Processing Mycobacterium vaccae...'
Wed 22 Oct 16:41:28 BST 2025 :: Processing Mycobacterium vaccae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-vaccae
+ pandas '; species = "Mycobacterium vaccae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Mycobacterium vaccae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Mycobacterium vaccae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-vaccae/full.csv
++ wc -l
+ data_size=7
+ logger 'Data for Mycobacterium vaccae has 7 rows'
+ local 'message=Data for Mycobacterium vaccae has 7 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:29 BST 2025'
+ echo 'Wed 22 Oct 16:41:29 BST 2025 :: Data for Mycobacterium vaccae has 7 rows'
Wed 22 Oct 16:41:29 BST 2025 :: Data for Mycobacterium vaccae has 7 rows
+ '[' 7 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Mycobacterium-vaccae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Neisseria gonorrhoeae' ']'
+ species_safe=Neisseria-gonorrhoeae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-gonorrhoeae
+ logger 'Processing Neisseria gonorrhoeae...'
+ local 'message=Processing Neisseria gonorrhoeae...'
++ date
+ local '_date=Wed 22 Oct 16:41:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:29 BST 2025'
+ echo 'Wed 22 Oct 16:41:29 BST 2025 :: Processing Neisseria gonorrhoeae...'
Wed 22 Oct 16:41:29 BST 2025 :: Processing Neisseria gonorrhoeae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-gonorrhoeae
+ pandas '; species = "Neisseria gonorrhoeae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Neisseria gonorrhoeae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Neisseria gonorrhoeae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-gonorrhoeae/full.csv
++ wc -l
+ data_size=44
+ logger 'Data for Neisseria gonorrhoeae has 44 rows'
+ local 'message=Data for Neisseria gonorrhoeae has 44 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:30 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:30 BST 2025'
+ echo 'Wed 22 Oct 16:41:30 BST 2025 :: Data for Neisseria gonorrhoeae has 44 rows'
Wed 22 Oct 16:41:30 BST 2025 :: Data for Neisseria gonorrhoeae has 44 rows
+ '[' 44 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-gonorrhoeae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Neisseria meningitidis' ']'
+ species_safe=Neisseria-meningitidis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-meningitidis
+ logger 'Processing Neisseria meningitidis...'
+ local 'message=Processing Neisseria meningitidis...'
++ date
+ local '_date=Wed 22 Oct 16:41:30 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:30 BST 2025'
+ echo 'Wed 22 Oct 16:41:30 BST 2025 :: Processing Neisseria meningitidis...'
Wed 22 Oct 16:41:30 BST 2025 :: Processing Neisseria meningitidis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-meningitidis
+ pandas '; species = "Neisseria meningitidis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Neisseria meningitidis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Neisseria meningitidis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-meningitidis/full.csv
++ wc -l
+ data_size=19
+ logger 'Data for Neisseria meningitidis has 19 rows'
+ local 'message=Data for Neisseria meningitidis has 19 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:34 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:34 BST 2025'
+ echo 'Wed 22 Oct 16:41:34 BST 2025 :: Data for Neisseria meningitidis has 19 rows'
Wed 22 Oct 16:41:34 BST 2025 :: Data for Neisseria meningitidis has 19 rows
+ '[' 19 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Neisseria-meningitidis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Proteus hauseri' ']'
+ species_safe=Proteus-hauseri
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-hauseri
+ logger 'Processing Proteus hauseri...'
+ local 'message=Processing Proteus hauseri...'
++ date
+ local '_date=Wed 22 Oct 16:41:34 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:34 BST 2025'
+ echo 'Wed 22 Oct 16:41:34 BST 2025 :: Processing Proteus hauseri...'
Wed 22 Oct 16:41:34 BST 2025 :: Processing Proteus hauseri...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-hauseri
+ pandas '; species = "Proteus hauseri"; df.query("species == @species")' ,
+ local 'cmd=; species = "Proteus hauseri"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Proteus hauseri"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-hauseri/full.csv
++ wc -l
+ data_size=4
+ logger 'Data for Proteus hauseri has 4 rows'
+ local 'message=Data for Proteus hauseri has 4 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:38 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:38 BST 2025'
+ echo 'Wed 22 Oct 16:41:38 BST 2025 :: Data for Proteus hauseri has 4 rows'
Wed 22 Oct 16:41:38 BST 2025 :: Data for Proteus hauseri has 4 rows
+ '[' 4 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-hauseri
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Proteus mirabilis' ']'
+ species_safe=Proteus-mirabilis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-mirabilis
+ logger 'Processing Proteus mirabilis...'
+ local 'message=Processing Proteus mirabilis...'
++ date
+ local '_date=Wed 22 Oct 16:41:38 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:38 BST 2025'
+ echo 'Wed 22 Oct 16:41:38 BST 2025 :: Processing Proteus mirabilis...'
Wed 22 Oct 16:41:38 BST 2025 :: Processing Proteus mirabilis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-mirabilis
+ pandas '; species = "Proteus mirabilis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Proteus mirabilis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Proteus mirabilis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-mirabilis/full.csv
++ wc -l
+ data_size=30
+ logger 'Data for Proteus mirabilis has 30 rows'
+ local 'message=Data for Proteus mirabilis has 30 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:41 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:41 BST 2025'
+ echo 'Wed 22 Oct 16:41:41 BST 2025 :: Data for Proteus mirabilis has 30 rows'
Wed 22 Oct 16:41:41 BST 2025 :: Data for Proteus mirabilis has 30 rows
+ '[' 30 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Proteus-mirabilis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Providencia stuartii' ']'
+ species_safe=Providencia-stuartii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Providencia-stuartii
+ logger 'Processing Providencia stuartii...'
+ local 'message=Processing Providencia stuartii...'
++ date
+ local '_date=Wed 22 Oct 16:41:41 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:41 BST 2025'
+ echo 'Wed 22 Oct 16:41:41 BST 2025 :: Processing Providencia stuartii...'
Wed 22 Oct 16:41:41 BST 2025 :: Processing Providencia stuartii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Providencia-stuartii
+ pandas '; species = "Providencia stuartii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Providencia stuartii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Providencia stuartii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Providencia-stuartii/full.csv
++ wc -l
+ data_size=68
+ logger 'Data for Providencia stuartii has 68 rows'
+ local 'message=Data for Providencia stuartii has 68 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:42 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:42 BST 2025'
+ echo 'Wed 22 Oct 16:41:42 BST 2025 :: Data for Providencia stuartii has 68 rows'
Wed 22 Oct 16:41:42 BST 2025 :: Data for Providencia stuartii has 68 rows
+ '[' 68 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Providencia-stuartii
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Pseudomonas aeruginosa' ']'
+ species_safe=Pseudomonas-aeruginosa
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa
+ logger 'Processing Pseudomonas aeruginosa...'
+ local 'message=Processing Pseudomonas aeruginosa...'
++ date
+ local '_date=Wed 22 Oct 16:41:42 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:42 BST 2025'
+ echo 'Wed 22 Oct 16:41:42 BST 2025 :: Processing Pseudomonas aeruginosa...'
Wed 22 Oct 16:41:42 BST 2025 :: Processing Pseudomonas aeruginosa...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa
+ pandas '; species = "Pseudomonas aeruginosa"; df.query("species == @species")' ,
+ local 'cmd=; species = "Pseudomonas aeruginosa"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Pseudomonas aeruginosa"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/full.csv
++ wc -l
+ data_size=37260
+ logger 'Data for Pseudomonas aeruginosa has 37260 rows'
+ local 'message=Data for Pseudomonas aeruginosa has 37260 rows'
++ date
+ local '_date=Wed 22 Oct 16:41:44 BST 2025'
+ local 'prefix=Wed 22 Oct 16:41:44 BST 2025'
+ echo 'Wed 22 Oct 16:41:44 BST 2025 :: Data for Pseudomonas aeruginosa has 37260 rows'
Wed 22 Oct 16:41:44 BST 2025 :: Data for Pseudomonas aeruginosa has 37260 rows
+ '[' 37260 -gt 1000 ']'
+ printf 'Pseudomonas aeruginosa\t37260\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f44737d6170>
38it [00:23, 1.64it/s]
38it [00:05, 7.07it/s]
Split counts:
train: 26082
test: 5589
validation: 5589
⏰ Completed process in 0:00:28.761054
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-aeruginosa/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Pseudomonas fluorescens' ']'
+ species_safe=Pseudomonas-fluorescens
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-fluorescens
+ logger 'Processing Pseudomonas fluorescens...'
+ local 'message=Processing Pseudomonas fluorescens...'
++ date
+ local '_date=Wed 22 Oct 16:42:28 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:28 BST 2025'
+ echo 'Wed 22 Oct 16:42:28 BST 2025 :: Processing Pseudomonas fluorescens...'
Wed 22 Oct 16:42:28 BST 2025 :: Processing Pseudomonas fluorescens...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-fluorescens
+ pandas '; species = "Pseudomonas fluorescens"; df.query("species == @species")' ,
+ local 'cmd=; species = "Pseudomonas fluorescens"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Pseudomonas fluorescens"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-fluorescens/full.csv
++ wc -l
+ data_size=253
+ logger 'Data for Pseudomonas fluorescens has 253 rows'
+ local 'message=Data for Pseudomonas fluorescens has 253 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:29 BST 2025'
+ echo 'Wed 22 Oct 16:42:29 BST 2025 :: Data for Pseudomonas fluorescens has 253 rows'
Wed 22 Oct 16:42:29 BST 2025 :: Data for Pseudomonas fluorescens has 253 rows
+ '[' 253 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-fluorescens
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Pseudomonas pseudoalcaligenes' ']'
+ species_safe=Pseudomonas-pseudoalcaligenes
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-pseudoalcaligenes
+ logger 'Processing Pseudomonas pseudoalcaligenes...'
+ local 'message=Processing Pseudomonas pseudoalcaligenes...'
++ date
+ local '_date=Wed 22 Oct 16:42:29 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:29 BST 2025'
+ echo 'Wed 22 Oct 16:42:29 BST 2025 :: Processing Pseudomonas pseudoalcaligenes...'
Wed 22 Oct 16:42:29 BST 2025 :: Processing Pseudomonas pseudoalcaligenes...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-pseudoalcaligenes
+ pandas '; species = "Pseudomonas pseudoalcaligenes"; df.query("species == @species")' ,
+ local 'cmd=; species = "Pseudomonas pseudoalcaligenes"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Pseudomonas pseudoalcaligenes"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-pseudoalcaligenes/full.csv
++ wc -l
+ data_size=21
+ logger 'Data for Pseudomonas pseudoalcaligenes has 21 rows'
+ local 'message=Data for Pseudomonas pseudoalcaligenes has 21 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:31 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:31 BST 2025'
+ echo 'Wed 22 Oct 16:42:31 BST 2025 :: Data for Pseudomonas pseudoalcaligenes has 21 rows'
Wed 22 Oct 16:42:31 BST 2025 :: Data for Pseudomonas pseudoalcaligenes has 21 rows
+ '[' 21 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-pseudoalcaligenes
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Pseudomonas syringae' ']'
+ species_safe=Pseudomonas-syringae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-syringae
+ logger 'Processing Pseudomonas syringae...'
+ local 'message=Processing Pseudomonas syringae...'
++ date
+ local '_date=Wed 22 Oct 16:42:31 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:31 BST 2025'
+ echo 'Wed 22 Oct 16:42:31 BST 2025 :: Processing Pseudomonas syringae...'
Wed 22 Oct 16:42:31 BST 2025 :: Processing Pseudomonas syringae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-syringae
+ pandas '; species = "Pseudomonas syringae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Pseudomonas syringae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Pseudomonas syringae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-syringae/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Pseudomonas syringae has 16 rows'
+ local 'message=Data for Pseudomonas syringae has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:32 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:32 BST 2025'
+ echo 'Wed 22 Oct 16:42:32 BST 2025 :: Data for Pseudomonas syringae has 16 rows'
Wed 22 Oct 16:42:32 BST 2025 :: Data for Pseudomonas syringae has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Pseudomonas-syringae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Salmonella enterica serovar Typhimurium' ']'
+ species_safe=Salmonella-enterica-serovar-Typhimurium
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-serovar-Typhimurium
+ logger 'Processing Salmonella enterica serovar Typhimurium...'
+ local 'message=Processing Salmonella enterica serovar Typhimurium...'
++ date
+ local '_date=Wed 22 Oct 16:42:32 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:32 BST 2025'
+ echo 'Wed 22 Oct 16:42:32 BST 2025 :: Processing Salmonella enterica serovar Typhimurium...'
Wed 22 Oct 16:42:32 BST 2025 :: Processing Salmonella enterica serovar Typhimurium...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-serovar-Typhimurium
+ pandas '; species = "Salmonella enterica serovar Typhimurium"; df.query("species == @species")' ,
+ local 'cmd=; species = "Salmonella enterica serovar Typhimurium"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Salmonella enterica serovar Typhimurium"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-serovar-Typhimurium/full.csv
++ wc -l
+ data_size=102
+ logger 'Data for Salmonella enterica serovar Typhimurium has 102 rows'
+ local 'message=Data for Salmonella enterica serovar Typhimurium has 102 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:33 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:33 BST 2025'
+ echo 'Wed 22 Oct 16:42:33 BST 2025 :: Data for Salmonella enterica serovar Typhimurium has 102 rows'
Wed 22 Oct 16:42:33 BST 2025 :: Data for Salmonella enterica serovar Typhimurium has 102 rows
+ '[' 102 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-serovar-Typhimurium
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Salmonella enterica subsp. enterica' ']'
+ species_safe=Salmonella-enterica-subsp.-enterica
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-subsp.-enterica
+ logger 'Processing Salmonella enterica subsp. enterica...'
+ local 'message=Processing Salmonella enterica subsp. enterica...'
++ date
+ local '_date=Wed 22 Oct 16:42:33 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:33 BST 2025'
+ echo 'Wed 22 Oct 16:42:33 BST 2025 :: Processing Salmonella enterica subsp. enterica...'
Wed 22 Oct 16:42:33 BST 2025 :: Processing Salmonella enterica subsp. enterica...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-subsp.-enterica
+ pandas '; species = "Salmonella enterica subsp. enterica"; df.query("species == @species")' ,
+ local 'cmd=; species = "Salmonella enterica subsp. enterica"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Salmonella enterica subsp. enterica"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-subsp.-enterica/full.csv
++ wc -l
+ data_size=13
+ logger 'Data for Salmonella enterica subsp. enterica has 13 rows'
+ local 'message=Data for Salmonella enterica subsp. enterica has 13 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:35 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:35 BST 2025'
+ echo 'Wed 22 Oct 16:42:35 BST 2025 :: Data for Salmonella enterica subsp. enterica has 13 rows'
Wed 22 Oct 16:42:35 BST 2025 :: Data for Salmonella enterica subsp. enterica has 13 rows
+ '[' 13 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-enterica-subsp.-enterica
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Salmonella typhimurium' ']'
+ species_safe=Salmonella-typhimurium
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-typhimurium
+ logger 'Processing Salmonella typhimurium...'
+ local 'message=Processing Salmonella typhimurium...'
++ date
+ local '_date=Wed 22 Oct 16:42:35 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:35 BST 2025'
+ echo 'Wed 22 Oct 16:42:35 BST 2025 :: Processing Salmonella typhimurium...'
Wed 22 Oct 16:42:35 BST 2025 :: Processing Salmonella typhimurium...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-typhimurium
+ pandas '; species = "Salmonella typhimurium"; df.query("species == @species")' ,
+ local 'cmd=; species = "Salmonella typhimurium"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Salmonella typhimurium"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-typhimurium/full.csv
++ wc -l
+ data_size=18
+ logger 'Data for Salmonella typhimurium has 18 rows'
+ local 'message=Data for Salmonella typhimurium has 18 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:36 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:36 BST 2025'
+ echo 'Wed 22 Oct 16:42:36 BST 2025 :: Data for Salmonella typhimurium has 18 rows'
Wed 22 Oct 16:42:36 BST 2025 :: Data for Salmonella typhimurium has 18 rows
+ '[' 18 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Salmonella-typhimurium
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Serratia marcescens' ']'
+ species_safe=Serratia-marcescens
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Serratia-marcescens
+ logger 'Processing Serratia marcescens...'
+ local 'message=Processing Serratia marcescens...'
++ date
+ local '_date=Wed 22 Oct 16:42:36 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:36 BST 2025'
+ echo 'Wed 22 Oct 16:42:36 BST 2025 :: Processing Serratia marcescens...'
Wed 22 Oct 16:42:36 BST 2025 :: Processing Serratia marcescens...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Serratia-marcescens
+ pandas '; species = "Serratia marcescens"; df.query("species == @species")' ,
+ local 'cmd=; species = "Serratia marcescens"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Serratia marcescens"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Serratia-marcescens/full.csv
++ wc -l
+ data_size=14
+ logger 'Data for Serratia marcescens has 14 rows'
+ local 'message=Data for Serratia marcescens has 14 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:37 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:37 BST 2025'
+ echo 'Wed 22 Oct 16:42:37 BST 2025 :: Data for Serratia marcescens has 14 rows'
Wed 22 Oct 16:42:37 BST 2025 :: Data for Serratia marcescens has 14 rows
+ '[' 14 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Serratia-marcescens
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Shigella boydii' ']'
+ species_safe=Shigella-boydii
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Shigella-boydii
+ logger 'Processing Shigella boydii...'
+ local 'message=Processing Shigella boydii...'
++ date
+ local '_date=Wed 22 Oct 16:42:37 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:37 BST 2025'
+ echo 'Wed 22 Oct 16:42:37 BST 2025 :: Processing Shigella boydii...'
Wed 22 Oct 16:42:37 BST 2025 :: Processing Shigella boydii...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Shigella-boydii
+ pandas '; species = "Shigella boydii"; df.query("species == @species")' ,
+ local 'cmd=; species = "Shigella boydii"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Shigella boydii"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Shigella-boydii/full.csv
++ wc -l
+ data_size=18
+ logger 'Data for Shigella boydii has 18 rows'
+ local 'message=Data for Shigella boydii has 18 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:38 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:38 BST 2025'
+ echo 'Wed 22 Oct 16:42:38 BST 2025 :: Data for Shigella boydii has 18 rows'
Wed 22 Oct 16:42:38 BST 2025 :: Data for Shigella boydii has 18 rows
+ '[' 18 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Shigella-boydii
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Staphylococcus aureus' ']'
+ species_safe=Staphylococcus-aureus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus
+ logger 'Processing Staphylococcus aureus...'
+ local 'message=Processing Staphylococcus aureus...'
++ date
+ local '_date=Wed 22 Oct 16:42:38 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:38 BST 2025'
+ echo 'Wed 22 Oct 16:42:38 BST 2025 :: Processing Staphylococcus aureus...'
Wed 22 Oct 16:42:38 BST 2025 :: Processing Staphylococcus aureus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus
+ pandas '; species = "Staphylococcus aureus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Staphylococcus aureus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Staphylococcus aureus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/full.csv
++ wc -l
+ data_size=4024
+ logger 'Data for Staphylococcus aureus has 4024 rows'
+ local 'message=Data for Staphylococcus aureus has 4024 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:40 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:40 BST 2025'
+ echo 'Wed 22 Oct 16:42:40 BST 2025 :: Data for Staphylococcus aureus has 4024 rows'
Wed 22 Oct 16:42:40 BST 2025 :: Data for Staphylococcus aureus has 4024 rows
+ '[' 4024 -gt 1000 ']'
+ printf 'Staphylococcus aureus\t4024\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f7a6200e170>
5it [00:02, 1.70it/s]
5it [00:00, 29.65it/s]
Split counts:
train: 2817
test: 604
validation: 603
⏰ Completed process in 0:00:03.134467
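The reported split counts can be checked against the requested fractions (`--train 0.7 --test 0.15`, with the remainder going to validation). A quick stdlib-only sanity check using the numbers logged above for Staphylococcus aureus:

```python
# Verify that the scaffold-split counts sum to the logged row count
# and approximately match the requested 0.7 / 0.15 / 0.15 fractions.
counts = {"train": 2817, "test": 604, "validation": 603}
total = sum(counts.values())

assert total == 4024  # matches the row count logged for this species

fractions = {k: v / total for k, v in counts.items()}
print({k: round(v, 3) for k, v in fractions.items()})
# {'train': 0.7, 'test': 0.15, 'validation': 0.15}
```

Scaffold splits group molecules by Bemis-Murcko-style scaffold before allocating groups to partitions, so the realised fractions only approximate the targets.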
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-aureus/full.csv
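The per-species blocks in this trace all follow the same control flow: make an output directory, filter the cleaned CSV to one species, count data rows with `tail -n+2 | wc -l`, and only run the scaffold split when the count exceeds 1000 (otherwise the directory is removed). A hypothetical condensed sketch of that flow, with a stub one-row CSV standing in for the filtered `full.csv`:

```shell
# Condensed sketch of the per-species threshold logic seen in this trace.
# The CSV and the echoed schemist command are illustrative stand-ins.
set -euo pipefail

min_rows=1000
full_csv=$(mktemp)
printf 'smiles,species\nCCO,Example species\n' > "$full_csv"

# Row count excludes the header line, as in: tail -n+2 full.csv | wc -l
data_size=$(tail -n+2 "$full_csv" | wc -l)

if [ "$data_size" -gt "$min_rows" ]; then
    echo "would run: schemist split $full_csv --type scaffold --train 0.7 --test 0.15 --seed 0"
else
    echo "only $data_size rows; skipping split"
    rm "$full_csv"
fi
```

Note the asymmetry in cleanup: species above the threshold keep their split files and lose only `full.csv`, while species below it lose the whole directory.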
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Staphylococcus capitis' ']'
+ species_safe=Staphylococcus-capitis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-capitis
+ logger 'Processing Staphylococcus capitis...'
+ local 'message=Processing Staphylococcus capitis...'
++ date
+ local '_date=Wed 22 Oct 16:42:47 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:47 BST 2025'
+ echo 'Wed 22 Oct 16:42:47 BST 2025 :: Processing Staphylococcus capitis...'
Wed 22 Oct 16:42:47 BST 2025 :: Processing Staphylococcus capitis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-capitis
+ pandas '; species = "Staphylococcus capitis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Staphylococcus capitis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Staphylococcus capitis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-capitis/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Staphylococcus capitis has 16 rows'
+ local 'message=Data for Staphylococcus capitis has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:48 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:48 BST 2025'
+ echo 'Wed 22 Oct 16:42:48 BST 2025 :: Data for Staphylococcus capitis has 16 rows'
Wed 22 Oct 16:42:48 BST 2025 :: Data for Staphylococcus capitis has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-capitis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Staphylococcus epidermidis' ']'
+ species_safe=Staphylococcus-epidermidis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-epidermidis
+ logger 'Processing Staphylococcus epidermidis...'
+ local 'message=Processing Staphylococcus epidermidis...'
++ date
+ local '_date=Wed 22 Oct 16:42:48 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:48 BST 2025'
+ echo 'Wed 22 Oct 16:42:48 BST 2025 :: Processing Staphylococcus epidermidis...'
Wed 22 Oct 16:42:48 BST 2025 :: Processing Staphylococcus epidermidis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-epidermidis
+ pandas '; species = "Staphylococcus epidermidis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Staphylococcus epidermidis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Staphylococcus epidermidis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-epidermidis/full.csv
++ wc -l
+ data_size=50
+ logger 'Data for Staphylococcus epidermidis has 50 rows'
+ local 'message=Data for Staphylococcus epidermidis has 50 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:49 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:49 BST 2025'
+ echo 'Wed 22 Oct 16:42:49 BST 2025 :: Data for Staphylococcus epidermidis has 50 rows'
Wed 22 Oct 16:42:49 BST 2025 :: Data for Staphylococcus epidermidis has 50 rows
+ '[' 50 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-epidermidis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Staphylococcus heamolyticus' ']'
+ species_safe=Staphylococcus-heamolyticus
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-heamolyticus
+ logger 'Processing Staphylococcus heamolyticus...'
+ local 'message=Processing Staphylococcus heamolyticus...'
++ date
+ local '_date=Wed 22 Oct 16:42:49 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:49 BST 2025'
+ echo 'Wed 22 Oct 16:42:49 BST 2025 :: Processing Staphylococcus heamolyticus...'
Wed 22 Oct 16:42:49 BST 2025 :: Processing Staphylococcus heamolyticus...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-heamolyticus
+ pandas '; species = "Staphylococcus heamolyticus"; df.query("species == @species")' ,
+ local 'cmd=; species = "Staphylococcus heamolyticus"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Staphylococcus heamolyticus"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-heamolyticus/full.csv
++ wc -l
+ data_size=9
+ logger 'Data for Staphylococcus heamolyticus has 9 rows'
+ local 'message=Data for Staphylococcus heamolyticus has 9 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:51 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:51 BST 2025'
+ echo 'Wed 22 Oct 16:42:51 BST 2025 :: Data for Staphylococcus heamolyticus has 9 rows'
Wed 22 Oct 16:42:51 BST 2025 :: Data for Staphylococcus heamolyticus has 9 rows
+ '[' 9 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Staphylococcus-heamolyticus
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Stenotrophomonas maltophilia' ']'
+ species_safe=Stenotrophomonas-maltophilia
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Stenotrophomonas-maltophilia
+ logger 'Processing Stenotrophomonas maltophilia...'
+ local 'message=Processing Stenotrophomonas maltophilia...'
++ date
+ local '_date=Wed 22 Oct 16:42:51 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:51 BST 2025'
+ echo 'Wed 22 Oct 16:42:51 BST 2025 :: Processing Stenotrophomonas maltophilia...'
Wed 22 Oct 16:42:51 BST 2025 :: Processing Stenotrophomonas maltophilia...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Stenotrophomonas-maltophilia
+ pandas '; species = "Stenotrophomonas maltophilia"; df.query("species == @species")' ,
+ local 'cmd=; species = "Stenotrophomonas maltophilia"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Stenotrophomonas maltophilia"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Stenotrophomonas-maltophilia/full.csv
++ wc -l
+ data_size=86
+ logger 'Data for Stenotrophomonas maltophilia has 86 rows'
+ local 'message=Data for Stenotrophomonas maltophilia has 86 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:52 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:52 BST 2025'
+ echo 'Wed 22 Oct 16:42:52 BST 2025 :: Data for Stenotrophomonas maltophilia has 86 rows'
Wed 22 Oct 16:42:52 BST 2025 :: Data for Stenotrophomonas maltophilia has 86 rows
+ '[' 86 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Stenotrophomonas-maltophilia
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Streptococcus agalactiae' ']'
+ species_safe=Streptococcus-agalactiae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-agalactiae
+ logger 'Processing Streptococcus agalactiae...'
+ local 'message=Processing Streptococcus agalactiae...'
++ date
+ local '_date=Wed 22 Oct 16:42:52 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:52 BST 2025'
+ echo 'Wed 22 Oct 16:42:52 BST 2025 :: Processing Streptococcus agalactiae...'
Wed 22 Oct 16:42:52 BST 2025 :: Processing Streptococcus agalactiae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-agalactiae
+ pandas '; species = "Streptococcus agalactiae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Streptococcus agalactiae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Streptococcus agalactiae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-agalactiae/full.csv
++ wc -l
+ data_size=11
+ logger 'Data for Streptococcus agalactiae has 11 rows'
+ local 'message=Data for Streptococcus agalactiae has 11 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:53 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:53 BST 2025'
+ echo 'Wed 22 Oct 16:42:53 BST 2025 :: Data for Streptococcus agalactiae has 11 rows'
Wed 22 Oct 16:42:53 BST 2025 :: Data for Streptococcus agalactiae has 11 rows
+ '[' 11 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-agalactiae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Streptococcus bovis' ']'
+ species_safe=Streptococcus-bovis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-bovis
+ logger 'Processing Streptococcus bovis...'
+ local 'message=Processing Streptococcus bovis...'
++ date
+ local '_date=Wed 22 Oct 16:42:53 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:53 BST 2025'
+ echo 'Wed 22 Oct 16:42:53 BST 2025 :: Processing Streptococcus bovis...'
Wed 22 Oct 16:42:53 BST 2025 :: Processing Streptococcus bovis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-bovis
+ pandas '; species = "Streptococcus bovis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Streptococcus bovis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Streptococcus bovis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-bovis/full.csv
++ wc -l
+ data_size=60
+ logger 'Data for Streptococcus bovis has 60 rows'
+ local 'message=Data for Streptococcus bovis has 60 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:55 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:55 BST 2025'
+ echo 'Wed 22 Oct 16:42:55 BST 2025 :: Data for Streptococcus bovis has 60 rows'
Wed 22 Oct 16:42:55 BST 2025 :: Data for Streptococcus bovis has 60 rows
+ '[' 60 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-bovis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Streptococcus oralis' ']'
+ species_safe=Streptococcus-oralis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-oralis
+ logger 'Processing Streptococcus oralis...'
+ local 'message=Processing Streptococcus oralis...'
++ date
+ local '_date=Wed 22 Oct 16:42:55 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:55 BST 2025'
+ echo 'Wed 22 Oct 16:42:55 BST 2025 :: Processing Streptococcus oralis...'
Wed 22 Oct 16:42:55 BST 2025 :: Processing Streptococcus oralis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-oralis
+ pandas '; species = "Streptococcus oralis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Streptococcus oralis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Streptococcus oralis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-oralis/full.csv
++ wc -l
+ data_size=9
+ logger 'Data for Streptococcus oralis has 9 rows'
+ local 'message=Data for Streptococcus oralis has 9 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:56 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:56 BST 2025'
+ echo 'Wed 22 Oct 16:42:56 BST 2025 :: Data for Streptococcus oralis has 9 rows'
Wed 22 Oct 16:42:56 BST 2025 :: Data for Streptococcus oralis has 9 rows
+ '[' 9 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-oralis
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Streptococcus pneumoniae' ']'
+ species_safe=Streptococcus-pneumoniae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae
+ logger 'Processing Streptococcus pneumoniae...'
+ local 'message=Processing Streptococcus pneumoniae...'
++ date
+ local '_date=Wed 22 Oct 16:42:56 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:56 BST 2025'
+ echo 'Wed 22 Oct 16:42:56 BST 2025 :: Processing Streptococcus pneumoniae...'
Wed 22 Oct 16:42:56 BST 2025 :: Processing Streptococcus pneumoniae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae
+ pandas '; species = "Streptococcus pneumoniae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Streptococcus pneumoniae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Streptococcus pneumoniae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/full.csv
++ wc -l
+ data_size=1540
+ logger 'Data for Streptococcus pneumoniae has 1540 rows'
+ local 'message=Data for Streptococcus pneumoniae has 1540 rows'
++ date
+ local '_date=Wed 22 Oct 16:42:57 BST 2025'
+ local 'prefix=Wed 22 Oct 16:42:57 BST 2025'
+ echo 'Wed 22 Oct 16:42:57 BST 2025 :: Data for Streptococcus pneumoniae has 1540 rows'
Wed 22 Oct 16:42:57 BST 2025 :: Data for Streptococcus pneumoniae has 1540 rows
+ '[' 1540 -gt 1000 ']'
+ printf 'Streptococcus pneumoniae\t1540\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7faaca062170>
2it [00:01, 1.84it/s]
2it [00:00, 39.22it/s]
Split counts:
train: 1078
test: 231
validation: 231
⏰ Completed process in 0:00:01.147444
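`schemist split --type scaffold` partitions molecules so that compounds sharing a Bemis–Murcko scaffold land in the same split, preventing near-duplicate structures from leaking between train and test. schemist's exact algorithm isn't shown in the trace; the sketch below is a generic group-aware allocation with toy scaffold keys (a real implementation would derive keys from SMILES, e.g. with RDKit):

```python
import random
from collections import defaultdict

def grouped_split(keys, train=0.7, test=0.15, seed=0):
    """Assign whole groups (e.g. scaffolds) to partitions so no group
    is shared between train, test, and validation."""
    groups = defaultdict(list)
    for i, key in enumerate(keys):
        groups[key].append(i)
    order = sorted(groups)              # deterministic base order
    random.Random(seed).shuffle(order)  # seeded shuffle, like --seed 0
    n = len(keys)
    labels = [None] * n
    filled = 0
    for key in order:
        # Fill train up to its quota, then test, then validation.
        if filled < train * n:
            label = "train"
        elif filled < (train + test) * n:
            label = "test"
        else:
            label = "validation"
        for i in groups[key]:
            labels[i] = label
        filled += len(groups[key])
    return labels

# Toy example: 6 molecules over 3 hypothetical scaffold keys.
labels = grouped_split(["s1", "s1", "s2", "s2", "s3", "s3"],
                       train=0.5, test=0.25, seed=0)
```

Because whole groups are allocated at once, the realized split sizes only approximate the requested fractions, which is why the counts above (1078/231/231) are not exactly 70/15/15 of 1540.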
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/scaffold-split.csv
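The three extraction passes above filter `scaffold-split.csv` on the boolean indicator columns schemist writes (`is_train`, `is_test`, `is_validation`), then compress each subset with the external `gzip --best -f`. A stdlib-only sketch of the same fan-out, using Python's `gzip` module in place of the external command and hypothetical sample rows:

```python
import csv
import gzip
import io
import tempfile
from pathlib import Path

# Hypothetical split table: one boolean indicator column per partition.
split_csv = io.StringIO(
    "smiles,is_train,is_test,is_validation\n"
    "CCO,True,False,False\n"
    "CCN,False,True,False\n"
    "CCC,True,False,False\n"
)
rows = list(csv.DictReader(split_csv))

outdir = Path(tempfile.mkdtemp())
counts = {}
for split in ("train", "test", "validation"):
    subset = [r for r in rows if r[f"is_{split}"] == "True"]
    counts[split] = len(subset)
    # gzip.open stands in for the external `gzip --best -f` step.
    with gzip.open(outdir / f"scaffold-split-{split}.csv.gz",
                   "wt", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(subset)
```

Note the indicator columns arrive as the strings `"True"`/`"False"` when read back through `csv`; pandas' `.query("is_train")`, as used in the trace, works because `read_csv` parses them into real booleans.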
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pneumoniae/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Streptococcus pyogenes' ']'
+ species_safe=Streptococcus-pyogenes
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pyogenes
+ logger 'Processing Streptococcus pyogenes...'
+ local 'message=Processing Streptococcus pyogenes...'
++ date
+ local '_date=Wed 22 Oct 16:43:02 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:02 BST 2025'
+ echo 'Wed 22 Oct 16:43:02 BST 2025 :: Processing Streptococcus pyogenes...'
Wed 22 Oct 16:43:02 BST 2025 :: Processing Streptococcus pyogenes...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pyogenes
+ pandas '; species = "Streptococcus pyogenes"; df.query("species == @species")' ,
+ local 'cmd=; species = "Streptococcus pyogenes"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Streptococcus pyogenes"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pyogenes/full.csv
++ wc -l
+ data_size=251
+ logger 'Data for Streptococcus pyogenes has 251 rows'
+ local 'message=Data for Streptococcus pyogenes has 251 rows'
++ date
+ local '_date=Wed 22 Oct 16:43:03 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:03 BST 2025'
+ echo 'Wed 22 Oct 16:43:03 BST 2025 :: Data for Streptococcus pyogenes has 251 rows'
Wed 22 Oct 16:43:03 BST 2025 :: Data for Streptococcus pyogenes has 251 rows
+ '[' 251 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Streptococcus-pyogenes
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Vibrio cholerae' ']'
+ species_safe=Vibrio-cholerae
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Vibrio-cholerae
+ logger 'Processing Vibrio cholerae...'
+ local 'message=Processing Vibrio cholerae...'
++ date
+ local '_date=Wed 22 Oct 16:43:03 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:03 BST 2025'
+ echo 'Wed 22 Oct 16:43:03 BST 2025 :: Processing Vibrio cholerae...'
Wed 22 Oct 16:43:03 BST 2025 :: Processing Vibrio cholerae...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Vibrio-cholerae
+ pandas '; species = "Vibrio cholerae"; df.query("species == @species")' ,
+ local 'cmd=; species = "Vibrio cholerae"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Vibrio cholerae"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Vibrio-cholerae/full.csv
++ wc -l
+ data_size=27
+ logger 'Data for Vibrio cholerae has 27 rows'
+ local 'message=Data for Vibrio cholerae has 27 rows'
++ date
+ local '_date=Wed 22 Oct 16:43:05 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:05 BST 2025'
+ echo 'Wed 22 Oct 16:43:05 BST 2025 :: Data for Vibrio cholerae has 27 rows'
Wed 22 Oct 16:43:05 BST 2025 :: Data for Vibrio cholerae has 27 rows
+ '[' 27 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Vibrio-cholerae
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Yersinia enterocolitica' ']'
+ species_safe=Yersinia-enterocolitica
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica
+ logger 'Processing Yersinia enterocolitica...'
+ local 'message=Processing Yersinia enterocolitica...'
++ date
+ local '_date=Wed 22 Oct 16:43:05 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:05 BST 2025'
+ echo 'Wed 22 Oct 16:43:05 BST 2025 :: Processing Yersinia enterocolitica...'
Wed 22 Oct 16:43:05 BST 2025 :: Processing Yersinia enterocolitica...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica
+ pandas '; species = "Yersinia enterocolitica"; df.query("species == @species")' ,
+ local 'cmd=; species = "Yersinia enterocolitica"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Yersinia enterocolitica"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/full.csv
++ wc -l
+ data_size=1405
+ logger 'Data for Yersinia enterocolitica has 1405 rows'
+ local 'message=Data for Yersinia enterocolitica has 1405 rows'
++ date
+ local '_date=Wed 22 Oct 16:43:07 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:07 BST 2025'
+ echo 'Wed 22 Oct 16:43:07 BST 2025 :: Data for Yersinia enterocolitica has 1405 rows'
Wed 22 Oct 16:43:07 BST 2025 :: Data for Yersinia enterocolitica has 1405 rows
+ '[' 1405 -gt 1000 ']'
+ printf 'Yersinia enterocolitica\t1405\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7f7c53516170>
2it [00:00, 2.26it/s]
2it [00:00, 42.79it/s]
Split counts:
train: 984
test: 211
validation: 210
⏰ Completed process in 0:00:00.957709
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-enterocolitica/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Yersinia pestis' ']'
+ species_safe=Yersinia-pestis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis
+ logger 'Processing Yersinia pestis...'
+ local 'message=Processing Yersinia pestis...'
++ date
+ local '_date=Wed 22 Oct 16:43:16 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:16 BST 2025'
+ echo 'Wed 22 Oct 16:43:16 BST 2025 :: Processing Yersinia pestis...'
Wed 22 Oct 16:43:16 BST 2025 :: Processing Yersinia pestis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis
+ pandas '; species = "Yersinia pestis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Yersinia pestis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Yersinia pestis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/full.csv
++ wc -l
+ data_size=10003
+ logger 'Data for Yersinia pestis has 10003 rows'
+ local 'message=Data for Yersinia pestis has 10003 rows'
++ date
+ local '_date=Wed 22 Oct 16:43:18 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:18 BST 2025'
+ echo 'Wed 22 Oct 16:43:18 BST 2025 :: Data for Yersinia pestis has 10003 rows'
Wed 22 Oct 16:43:18 BST 2025 :: Data for Yersinia pestis has 10003 rows
+ '[' 10003 -gt 1000 ']'
+ printf 'Yersinia pestis\t10003\n'
+ schemist split /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/full.csv --type scaffold --train 0.7 --test 0.15 --seed 0 --output /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split.csv
🚀 Splitting table with the following parameters:
subcommand: split
output: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split.csv' mode='w' encoding='UTF-8'>
format: None
input: <_io.TextIOWrapper name='/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/full.csv' mode='r' encoding='UTF-8'>
representation: SMILES
column: smiles
prefix: None
type: scaffold
train: 0.7
test: 0.15
seed: 0
func: <function _split at 0x7fadf3d5a170>
11it [00:05, 2.01it/s]
11it [00:00, 17.94it/s]
Split counts:
train: 7003
test: 1501
validation: 1499
⏰ Completed process in 0:00:06.152801
+ for split in "train" "test" "validation"
+ pandas '.query("is_train")'
+ local 'cmd=.query("is_train")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_train").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split-train.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_test")'
+ local 'cmd=.query("is_test")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_test").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split-test.csv
+ for split in "train" "test" "validation"
+ pandas '.query("is_validation")'
+ local 'cmd=.query("is_validation")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False).query("is_validation").to_csv(sys.stdout, index=False, sep=",")'
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split-validation.csv
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/scaffold-split.csv
+ rm /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pestis/full.csv
+ for species in "${unique_organisms[@]}"
+ '[' -n 'Yersinia pseudotuberculosis' ']'
+ species_safe=Yersinia-pseudotuberculosis
+ output_data_dir=/nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pseudotuberculosis
+ logger 'Processing Yersinia pseudotuberculosis...'
+ local 'message=Processing Yersinia pseudotuberculosis...'
++ date
+ local '_date=Wed 22 Oct 16:43:31 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:31 BST 2025'
+ echo 'Wed 22 Oct 16:43:31 BST 2025 :: Processing Yersinia pseudotuberculosis...'
Wed 22 Oct 16:43:31 BST 2025 :: Processing Yersinia pseudotuberculosis...
+ mkdir -p /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pseudotuberculosis
+ pandas '; species = "Yersinia pseudotuberculosis"; df.query("species == @species")' ,
+ local 'cmd=; species = "Yersinia pseudotuberculosis"; df.query("species == @species")'
+ local sep1=,
+ local idx=False
+ local sep2=,
+ python -c 'import sys; import pandas as pd; df = pd.read_csv(sys.stdin, sep=",", low_memory=False); species = "Yersinia pseudotuberculosis"; df.query("species == @species").to_csv(sys.stdout, index=False, sep=",")'
++ tail -n+2 /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pseudotuberculosis/full.csv
++ wc -l
+ data_size=16
+ logger 'Data for Yersinia pseudotuberculosis has 16 rows'
+ local 'message=Data for Yersinia pseudotuberculosis has 16 rows'
++ date
+ local '_date=Wed 22 Oct 16:43:32 BST 2025'
+ local 'prefix=Wed 22 Oct 16:43:32 BST 2025'
+ echo 'Wed 22 Oct 16:43:32 BST 2025 :: Data for Yersinia pseudotuberculosis has 16 rows'
Wed 22 Oct 16:43:32 BST 2025 :: Data for Yersinia pseudotuberculosis has 16 rows
+ '[' 16 -gt 1000 ']'
+ rm -r /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/Yersinia-pseudotuberculosis
+ gzip --best -f /nemo/lab/johnsone/home/users/johnsoe/data/datasets/spark/species-all-v2/spark-all.csv
+ set +x
Wed 22 Oct 16:43:33 BST 2025 :: Done!