Improve dataset card: Add task category and sample usage from GitHub README

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +38 -1
README.md CHANGED
@@ -1,12 +1,14 @@
  ---
  license: cdla-permissive-2.0
  tags:
  - medical
  - biology
  - ieeg
  - seizure
  - epilepsy
- pretty_name: SWEC iEEG Dataset
  ---

  ## SWEC iEEG Dataset
@@ -57,6 +59,41 @@ Every file contains attributes `patient` with the patient ID, `channels` with th

  ---

  ## Dataset curation

  ### Preparation
 
  ---
  license: cdla-permissive-2.0
+ pretty_name: SWEC iEEG Dataset
  tags:
  - medical
  - biology
  - ieeg
  - seizure
  - epilepsy
+ task_categories:
+ - time-series-forecasting
  ---

  ## SWEC iEEG Dataset
 

  ---

+ ## Sample Usage
+
+ This section provides basic usage examples for inference and training with MVPFormer, the model associated with this dataset. For more details, refer to the [MVPFormer GitHub repository](https://github.com/IBM/multi-variate-parallel-transformer).
+
+ To prepare the environment for running MVPFormer, you need a mixture of pip installs and compilation from source.
+
+ ### Install Python Packages
+
+ A `requirements.txt` file is provided in the MVPFormer repository. Install all requirements with `pip install -r requirements.txt`.
+
+ ### DeepSpeed
+
+ You have to compile [`DeepSpeed`](https://www.deepspeed.ai/tutorials/advanced-install/) manually to activate some necessary extensions. The exact procedure can vary with your software and hardware stack; here we report our reference installation steps.
+
+ ```bash
+ DS_BUILD_FUSED_ADAM=1 DS_BUILD_FUSED_LAMB=1 pip install --no-cache-dir deepspeed --global-option="build_ext" --global-option="-j8"
+ ```
+
+ ### Inference with MVPFormer
+
+ We use `PyTorch Lightning` to distribute reproducible configuration files for our experiments. The example testing configuration file can be found in the `configs` folder of the MVPFormer repository. You can start testing with:
+ ```bash
+ python main.py test --config configs/mvpformer_classification.yaml --model.init_args.base_model '<base_checkpoint_path>' --model.init_args.head_model '<head_checkpoint_path>' --data.init_args.folder '<dataset_path>' --data.init_args.test_patients ['<dataset_subject>']
+ ```
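
The checkpoint and dataset placeholders can also be set in the Lightning configuration file instead of being passed as command-line overrides. A minimal sketch of the corresponding fragment, assuming the field layout implied by the dotted flags above (the authoritative schema is the shipped `configs/mvpformer_classification.yaml`):

```yaml
# Hypothetical fragment mirroring the command-line overrides;
# field names are inferred from the dotted flags, not taken from the shipped config.
model:
  init_args:
    base_model: <base_checkpoint_path>
    head_model: <head_checkpoint_path>
data:
  init_args:
    folder: <dataset_path>
    test_patients:
      - <dataset_subject>
```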
+
+ ### Training MVPFormer
+
+ We use `PyTorch Lightning` to distribute reproducible configuration files for our experiments. The example training configuration file can be found in the `configs` folder of the MVPFormer repository. You can start training with:
+ ```bash
+ python main.py fit --config configs/mvpformer_classification.yaml --model.init_args.base_model '<base_checkpoint_path>' --model.init_args.head_model '<head_checkpoint_path>' --data.init_args.folder '<dataset_path>' --data.init_args.train_patients ['<dataset_subject>']
+ ```
+ The example parameters are equivalent to those we used to train MVPFormer, except for the hardware setup, such as the number of GPUs and CPU workers.
+
+ ---
+
  ## Dataset curation

  ### Preparation
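
The hunk header above notes that every data file carries `patient` and `channels` attributes. Assuming the files are HDF5 (an assumption based on that attribute layout; the file name, dataset name, and shapes below are purely illustrative), inspecting one might look like the following sketch, which first builds a small dummy file so the example is self-contained:

```python
import h5py
import numpy as np

# Build a small dummy file mimicking the layout described in the card:
# one signal matrix plus `patient` and `channels` file attributes.
# Names and shapes here are illustrative, not the dataset's real ones.
with h5py.File("demo_subject.h5", "w") as f:
    f.create_dataset("ieeg", data=np.zeros((4, 1024), dtype=np.float32))
    f.attrs["patient"] = "ID01"
    f.attrs["channels"] = ["ch1", "ch2", "ch3", "ch4"]

# Read it back the way one would inspect a real recording file.
with h5py.File("demo_subject.h5", "r") as f:
    patient = f.attrs["patient"]
    channels = list(f.attrs["channels"])
    shape = f["ieeg"].shape
    print(patient, len(channels), shape)
```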