Niksa Praljak committed
Commit: 8b00415
Parent(s): b2627ff

Add hardware details in README and make ProteoScribe updates

Files changed:
- README.md (+20, -0)
- run_ProteoScribe_sample.py (+1, -2)
README.md — CHANGED

```diff
@@ -16,6 +16,26 @@ doi: https://doi.org/10.1101/2024.11.11.622734
 
 [Read the paper on bioRxiv](https://www.biorxiv.org/content/10.1101/2024.11.11.622734v1)
 
+
+## Hardware Requirements and Testing Environment
+
+This code has been tested on the following High-Performance Computing (HPC) environment:
+
+### Hardware Specifications
+- **CPU**: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz
+- **CPU Cores**: 32 (2 NUMA nodes with 16 cores each)
+- **GPU**: NVIDIA A100-PCIE-40GB
+- **RAM**: 251GB
+- **Operating System**: CentOS Linux 8
+
+### Compute Environment
+- **Job Scheduler**: Slurm
+- **Allocation**:
+  - Number of nodes: 1
+  - CPUs per task: 12
+  - Memory per node: 93.7GB
+  - GPUs per node: 1 (A100)
+
 ## Software Requirements
 
 ### Required Dependencies
```
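The Slurm allocation documented above corresponds to a submission script along these lines. This is only a sketch: the job name, any partition/account flags, and the invocation of `run_ProteoScribe_sample.py` (including the `--input_path` argument suggested by `config_args_parser.input_path` in the script) are assumptions, not taken from the repository.

```
#!/bin/bash
#SBATCH --job-name=proteoscribe_sample   # placeholder job name
#SBATCH --nodes=1                        # Number of nodes: 1
#SBATCH --cpus-per-task=12               # CPUs per task: 12
#SBATCH --mem=93G                        # Memory per node: ~93.7GB
#SBATCH --gres=gpu:a100:1                # GPUs per node: 1 (A100)

# Placeholder invocation; adjust the environment setup and input path
# to your own cluster and data layout.
python run_ProteoScribe_sample.py --input_path embeddings.pt
```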
run_ProteoScribe_sample.py — CHANGED

```diff
@@ -152,8 +152,7 @@ if __name__ == '__main__':
     config_args = convert_to_namespace(config_dict)
 
     # Set device if not specified in config
-
-    config_args.device = 'cuda' if torch.cuda.is_available() else 'cpu'
+    config_args.device = 'cuda' if torch.cuda.is_available() else 'cpu'
 
     # load test dataset
     embedding_dataset = torch.load(config_args_parser.input_path)
```
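The retained line uses the standard PyTorch device-selection idiom: prefer CUDA when a GPU is visible, otherwise fall back to CPU. A minimal, self-contained sketch of the pattern (here `convert_to_namespace` and the config keys are stand-ins for the repository's helpers, and the `try/except` keeps the sketch runnable even without `torch` installed):

```python
from types import SimpleNamespace

def convert_to_namespace(config_dict):
    # Hypothetical stand-in for the script's convert_to_namespace helper:
    # turns a plain dict into attribute-style access.
    return SimpleNamespace(**config_dict)

config_args = convert_to_namespace({'batch_size': 8})

# Same device-selection pattern as the commit: 'cuda' when a GPU is
# available, 'cpu' otherwise.
try:
    import torch
    config_args.device = 'cuda' if torch.cuda.is_available() else 'cpu'
except ImportError:
    config_args.device = 'cpu'

print(config_args.device)
```

Note that, as committed, the assignment always overwrites `config_args.device`, so a device named in the config file is ignored despite the "if not specified" comment.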