usage: RGCN.py [-h] [--seed SEED]
               [--dataset {cora,cora_ml,citeseer,polblogs,pubmed,Flickr}]
               [--ptb_rate PTB_RATE]
               [--ptb_type {clean,meta,dice,minmax,pgd,random}]
               [--hidden HIDDEN] [--dropout DROPOUT] [--gpu GPU]
RGCN.py: error: argument --dataset: invalid choice: 'Pubmed' (choose from 'cora', 'cora_ml', 'citeseer', 'polblogs', 'pubmed', 'Flickr')
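Note: the run above failed only because `Pubmed` was passed with a capital P while the choice list is lowercase (except `Flickr`). A minimal sketch of a case-insensitive fix, assuming the parser is defined with argparse as the usage string suggests (argparse applies `type` conversion before validating against `choices`):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--dataset",
    # Normalize case before argparse checks membership in `choices`.
    # 'Flickr' is the one capitalized entry, so it is special-cased.
    type=lambda s: "Flickr" if s.lower() == "flickr" else s.lower(),
    choices=["cora", "cora_ml", "citeseer", "polblogs", "pubmed", "Flickr"],
    default="cora",
)

args = parser.parse_args(["--dataset", "Pubmed"])
print(args.dataset)  # pubmed
```

With this change, `--dataset Pubmed` and `--dataset FLICKR` both parse instead of aborting with "invalid choice".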
cuda: True
Loading pubmed dataset...
Traceback (most recent call last):
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 86, in <module>
    main()
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 64, in main
    acc = test_rgcn(perturbed_adj)
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 56, in test_rgcn
    gcn.fit(features, adj, labels, idx_train, idx_val, train_iters=200, verbose=True)
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/deeprobust/graph/defense/r_gcn.py", line 238, in fit
    self.adj_norm1 = self._normalize_adj(adj, power=-1/2)
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/deeprobust/graph/defense/r_gcn.py", line 338, in _normalize_adj
    A = adj + torch.eye(len(adj)).to(self.device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.45 GiB. GPU
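Note: the 1.45 GiB figure is consistent with `torch.eye(len(adj))` materializing a dense float32 identity matrix for Pubmed's roughly 19,717 nodes (node count assumed from the standard Planetoid Pubmed split). A quick back-of-the-envelope check:

```python
def dense_eye_gib(n_nodes: int, bytes_per_elem: int = 4) -> float:
    """Memory (GiB) required for a dense n x n float32 identity matrix,
    as allocated by torch.eye(n) in r_gcn.py's _normalize_adj."""
    return n_nodes * n_nodes * bytes_per_elem / 2**30

# Pubmed node count (assumed): 19,717
print(round(dense_eye_gib(19717), 2))  # 1.45
```

This matches the "Tried to allocate 1.45 GiB" in the traceback, so the fix is to add self-loops without densifying (e.g. a sparse COO identity, or incrementing the sparse adjacency's diagonal), rather than to shrink the model.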
cuda: True
Loading pubmed dataset...
=== training rgcn model ===
Epoch 0, training loss: 57.70949172973633
Epoch 10, training loss: 11.631561279296875
Epoch 20, training loss: 9.89040756225586
Epoch 30, training loss: 9.21313190460205
Epoch 40, training loss: 8.806578636169434
Epoch 50, training loss: 8.527679443359375
Epoch 60, training loss: 8.309123992919922
Epoch 70, training loss: 8.130915641784668
Epoch 80, training loss: 7.9674906730651855
Epoch 90, training loss: 7.852350234985352
Epoch 100, training loss: 7.710142612457275
Epoch 110, training loss: 7.603769302368164
Epoch 120, training loss: 7.498927593231201
Epoch 130, training loss: 7.4149346351623535
Epoch 140, training loss: 7.333662033081055
Epoch 150, training loss: 7.271054267883301
Epoch 160, training loss: 7.213706970214844
Epoch 170, training loss: 7.162234783172607
Epoch 180, training loss: 7.0984721183776855
Epoch 190, training loss: 7.053898811340332
=== picking the best model according to the performance on validation ===
Test set results: loss= 0.5612 accuracy= 0.8472
cuda: True
Loading pubmed dataset...
=== training rgcn model ===
Epoch 0, training loss: 56.90625
Epoch 10, training loss: 11.553112983703613
Epoch 20, training loss: 9.830387115478516
Epoch 30, training loss: 9.136892318725586
Epoch 40, training loss: 8.765497207641602
Epoch 50, training loss: 8.475526809692383
Epoch 60, training loss: 8.280681610107422
Epoch 70, training loss: 8.087503433227539
Epoch 80, training loss: 7.928478240966797
Epoch 90, training loss: 7.811628818511963
Epoch 100, training loss: 7.672886371612549
Epoch 110, training loss: 7.58695125579834
Epoch 120, training loss: 7.475732326507568
Epoch 130, training loss: 7.411134243011475
Epoch 140, training loss: 7.32908296585083
Epoch 150, training loss: 7.261781215667725
Epoch 160, training loss: 7.1920905113220215
Epoch 170, training loss: 7.137812614440918
Epoch 180, training loss: 7.085836410522461
Epoch 190, training loss: 7.047794342041016
=== picking the best model according to the performance on validation ===
Test set results: loss= 0.5899 accuracy= 0.8144
cuda: True
Loading pubmed dataset...
=== training rgcn model ===
Epoch 0, training loss: 55.918418884277344
Epoch 10, training loss: 11.583917617797852
Epoch 20, training loss: 9.933557510375977
Epoch 30, training loss: 9.269057273864746
Epoch 40, training loss: 8.910764694213867
Epoch 50, training loss: 8.666853904724121
Epoch 60, training loss: 8.472833633422852
Epoch 70, training loss: 8.292617797851562
Epoch 80, training loss: 8.157937049865723
Epoch 90, training loss: 8.016331672668457
Epoch 100, training loss: 7.9114789962768555
Epoch 110, training loss: 7.8105926513671875
Epoch 120, training loss: 7.71514892578125
Epoch 130, training loss: 7.642545223236084
Epoch 140, training loss: 7.555686950683594
Epoch 150, training loss: 7.506415367126465
Epoch 160, training loss: 7.446423530578613
Epoch 170, training loss: 7.402177333831787
Epoch 180, training loss: 7.349700450897217
Epoch 190, training loss: 7.307088375091553
=== picking the best model according to the performance on validation ===
Test set results: loss= 1.0091 accuracy= 0.5464
cuda: True
Loading flickr dataset...
=== training rgcn model ===
Epoch 0, training loss: 22.74506187438965
Epoch 10, training loss: 5.63859748840332
Epoch 20, training loss: 5.708770751953125
Epoch 30, training loss: 5.554620742797852
Epoch 40, training loss: 5.412952423095703
Epoch 50, training loss: 5.412877559661865
Epoch 60, training loss: 5.3626885414123535
Epoch 70, training loss: 5.331405162811279
Epoch 80, training loss: 5.378969192504883
Epoch 90, training loss: 5.297276020050049
Epoch 100, training loss: 5.371973037719727
Epoch 110, training loss: 5.319071292877197
Epoch 120, training loss: 5.3518147468566895
Epoch 130, training loss: 5.261008262634277
Epoch 140, training loss: 5.28603458404541
Epoch 150, training loss: 5.305586338043213
Epoch 160, training loss: 5.2708845138549805
Epoch 170, training loss: 5.316929817199707
Epoch 180, training loss: 5.216168403625488
Epoch 190, training loss: 5.198664665222168
=== picking the best model according to the performance on validation ===
Test set results: loss= 1.9881 accuracy= 0.3637
cuda: True
Loading flickr dataset...
=== training rgcn model ===
Epoch 0, training loss: 22.795940399169922
Epoch 10, training loss: 5.677748680114746
Epoch 20, training loss: 5.569639205932617
Epoch 30, training loss: 5.455380916595459
Epoch 40, training loss: 5.5096659660339355
Epoch 50, training loss: 5.381861686706543
Epoch 60, training loss: 5.4215779304504395
Epoch 70, training loss: 5.287722110748291
Epoch 80, training loss: 5.3233160972595215
Epoch 90, training loss: 5.311211585998535
Epoch 100, training loss: 5.3098907470703125
Epoch 110, training loss: 5.268268585205078
Epoch 120, training loss: 5.320863246917725
Epoch 130, training loss: 5.421219348907471
Epoch 140, training loss: 5.323326587677002
Epoch 150, training loss: 5.297636032104492
Epoch 160, training loss: 5.2516021728515625
Epoch 170, training loss: 5.263626575469971
Epoch 180, training loss: 5.30491304397583
Epoch 190, training loss: 5.260586261749268
=== picking the best model according to the performance on validation ===
Test set results: loss= 1.9864 accuracy= 0.3962
cuda: True
Loading flickr dataset...
=== training rgcn model ===
Epoch 0, training loss: 23.085922241210938
Epoch 10, training loss: 5.633774757385254
Epoch 20, training loss: 5.56341552734375
Epoch 30, training loss: 5.53253173828125
Epoch 40, training loss: 5.375304222106934
Epoch 50, training loss: 5.400635242462158
Epoch 60, training loss: 5.372068881988525
Epoch 70, training loss: 5.344683647155762
Epoch 80, training loss: 5.333987236022949
Epoch 90, training loss: 5.397330284118652
Epoch 100, training loss: 5.288071155548096
Epoch 110, training loss: 5.29317045211792
Epoch 120, training loss: 5.3733978271484375
Epoch 130, training loss: 5.22993803024292
Epoch 140, training loss: 5.215515613555908
Epoch 150, training loss: 5.330245494842529
Epoch 160, training loss: 5.3847174644470215
Epoch 170, training loss: 5.29070520401001
Epoch 180, training loss: 5.3520588874816895
Epoch 190, training loss: 5.251443862915039
=== picking the best model according to the performance on validation ===
Test set results: loss= 1.9803 accuracy= 0.3749
cuda: True
Loading flickr dataset...
Traceback (most recent call last):
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 46, in <module>
    perturbed_adj = torch.load(ptb_path)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1446, in _load
    result = unpickler.load()
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 270, in _cuda_deserialize
    return obj.cuda(device)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/_utils.py", line 114, in _cuda
    untyped_storage = torch.UntypedStorage(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 220.00 MiB. GPU has a total capacity of 47.41 GiB of which 79.38 MiB is free. Process 460136 has 47.04 GiB memory in use. Including non-PyTorch memory, this process has 260.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
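Note: this OOM happens inside `torch.load` itself, because the checkpoint was saved from CUDA tensors and deserialization restores them straight onto a GPU that another process (460136) has already filled (only 79.38 MiB free). A common remedy is `torch.load(ptb_path, map_location="cpu")`, which is real PyTorch API; the helper below is a hypothetical illustration of the decision, not code from RGCN.py:

```python
def pick_map_location(cuda_available: bool, gpu_free_bytes: int, needed_bytes: int) -> str:
    """Choose where torch.load should deserialize: CPU unless the GPU
    clearly has headroom (2x the tensor size, an arbitrary safety margin)."""
    if cuda_available and gpu_free_bytes > 2 * needed_bytes:
        return "cuda"
    return "cpu"

# The failing load needed 220 MiB with only ~79 MiB free:
print(pick_map_location(True, 79 * 2**20, 220 * 2**20))  # cpu

# The corresponding (hypothetical) patch to line 46 of RGCN.py would be:
#   perturbed_adj = torch.load(ptb_path, map_location="cpu")
#   perturbed_adj = perturbed_adj.to(device)  # move later, once memory allows
```

Loading to CPU first also makes the script robust to running on CPU-only machines, since deserializing a CUDA checkpoint without a GPU raises an error otherwise.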
cuda: True
Loading flickr dataset...
Traceback (most recent call last):
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 46, in <module>
    perturbed_adj = torch.load(ptb_path)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1446, in _load
    result = unpickler.load()
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 270, in _cuda_deserialize
    return obj.cuda(device)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/_utils.py", line 114, in _cuda
    untyped_storage = torch.UntypedStorage(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 220.00 MiB. GPU has a total capacity of 47.41 GiB of which 79.38 MiB is free. Process 460136 has 47.04 GiB memory in use. Including non-PyTorch memory, this process has 260.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
cuda: True
Loading flickr dataset...
=== training rgcn model ===
Epoch 0, training loss: 22.901403427124023
Epoch 10, training loss: 5.607109546661377
Epoch 20, training loss: 5.419294357299805
Epoch 30, training loss: 5.272122859954834
Epoch 40, training loss: 5.107397079467773
Epoch 50, training loss: 4.995242595672607
Epoch 60, training loss: 4.759927749633789
Epoch 70, training loss: 4.839883327484131
Epoch 80, training loss: 4.8474531173706055
Epoch 90, training loss: 4.736181735992432
Epoch 100, training loss: 4.5720672607421875
Epoch 110, training loss: 4.5880889892578125
Epoch 120, training loss: 4.55953311920166
Epoch 130, training loss: 4.67167854309082
Epoch 140, training loss: 4.461329936981201
Epoch 150, training loss: 4.494113922119141
Epoch 160, training loss: 4.5547966957092285
Epoch 170, training loss: 4.530261993408203
Epoch 180, training loss: 4.487424850463867
Epoch 190, training loss: 4.4200263023376465
=== picking the best model according to the performance on validation ===
Test set results: loss= 1.7739 accuracy= 0.4734
cuda: True
Loading flickr dataset...
Traceback (most recent call last):
  File "/home/yiren/new_ssd2/chunhui/yaning/project/cgscore/examples/graph/cgscore_experiments/runsh/../defense_method/RGCN.py", line 46, in <module>
    perturbed_adj = torch.load(ptb_path)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1446, in _load
    result = unpickler.load()
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/serialization.py", line 270, in _cuda_deserialize
    return obj.cuda(device)
  File "/home/yiren/new_ssd2/chunhui/miniconda/envs/cgscore/lib/python3.9/site-packages/torch/_utils.py", line 114, in _cuda
    untyped_storage = torch.UntypedStorage(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 220.00 MiB. GPU has a total capacity of 47.41 GiB of which 79.38 MiB is free. Process 460136 has 47.04 GiB memory in use. Including non-PyTorch memory, this process has 260.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
|