---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
  - eeg
  - biosignal
  - pytorch
  - neuroscience
  - braindecode

---

# DGCNN

DGCNN (Dynamical Graph Convolutional Neural Network) for EEG classification from Song et al. (2018) [1].

> **Architecture-only repository.** Documents the
> `braindecode.models.DGCNN` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import DGCNN

model = DGCNN(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```

The signal-shape arguments above are illustrative defaults; adjust them to
match your recording.
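
Assuming the instantiation above succeeds, a dummy forward pass can confirm the expected tensor shapes. The `(batch, n_chans, n_times)` input layout and the derived `n_times = sfreq * input_window_seconds` follow braindecode's usual conventions and are assumptions here, not guarantees of this particular class:

```python
import torch

# Assumed braindecode convention: inputs are (batch, n_chans, n_times),
# with n_times derived from sfreq * input_window_seconds (250 * 4.0 = 1000).
n_times = int(250 * 4.0)
x = torch.randn(8, 22, n_times)  # 8 windows, 22 channels, 1000 samples each

model.eval()
with torch.no_grad():
    y = model(x)

print(y.shape)  # expected: torch.Size([8, 4]), one score per output class
```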

## Documentation
- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.DGCNN.html>
- Interactive browser (live instantiation, parameter counts):
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/dgcnn.py#L253>


## Architecture

![DGCNN architecture](../_static/model/DGCNN.gif)


## Parameters

| Parameter | Type | Description |
|---|---|---|
| `chs_info` | list of dict, optional | Information about each channel, typically obtained from `mne.Info['chs']`. Each entry must contain a `'loc'` key with 3-D electrode positions so the initial adjacency matrix can be built from spatial proximity (Eq. 1). A montage must be set on the `mne.Info` object (see `mne.Info.set_montage`). If `None`, or if positions cannot be extracted, a `ValueError` is raised (see Notes). |
| `n_filters` | int, default=64 | Number of spectral graph-convolutional filters. This is the output feature dimension per node produced by the Chebyshev graph convolution followed by the 1 × 1 convolution (see Fig. 2 in the paper). The original code uses 64. |
| `cheb_order` | int, default=2 | Order `K` of the Chebyshev polynomial approximation (Eq. 11). |
| `n_neighbors` | int, default=5 | Number of spatial nearest neighbors per node used to build the initial adjacency matrix (Eq. 1). |
| `mlp_dims` | tuple[int, ...], default=(256,) | Hidden-layer sizes of the fully connected classification head. |
| `activation` | type[nn.Module], default=nn.ReLU | Activation function class used after the graph convolution and in the classification head. |
| `drop_prob` | float, default=0.5 | Dropout probability in the classification head. |
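
Since `chs_info` must carry 3-D electrode positions, one way to obtain it is from an `mne.Info` object with a montage set, as the table above describes. The sketch below is illustrative only: the channel names, the `standard_1020` montage, and the exact keyword combination are assumptions based on this card, not a prescribed recipe.

```python
import mne
from braindecode.models import DGCNN

# Build an mne.Info with a standard montage so each channel dict has a 'loc' entry.
ch_names = ["Fz", "Cz", "Pz", "Oz", "C3", "C4", "P3", "P4"]  # illustrative channels
info = mne.create_info(ch_names=ch_names, sfreq=250.0, ch_types="eeg")
info.set_montage("standard_1020")  # fills info['chs'][i]['loc'] with 3-D positions

# Pass the per-channel dicts so the initial adjacency matrix (Eq. 1) can be built
# from spatial proximity of the electrodes.
model = DGCNN(
    chs_info=info["chs"],
    n_chans=len(ch_names),
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```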


## References

1. Song, T., Zheng, W., Song, P., & Cui, Z. (2018). EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing, 11(3), 532-541. https://doi.org/10.1109/TAFFC.2018.2817622


## Citation

Cite the original architecture paper (see *References* above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.