### 8. **Tokenize and Load the Model**

Before we dive into tokenizing the dataset and loading the model, let's understand how the tokenization process is adapted to the wireless communication context. Here, **tokenization** refers to segmenting each wireless channel into patches, similar to how Vision Transformers (ViTs) process images. Each wireless channel is structured as a 32x32 matrix, where rows represent antennas and columns represent subcarriers.
The tokenization process involves **dividing the channel matrix into patches**, with each patch containing information from 16 consecutive subcarriers. These patches are then **embedded** into a 64-dimensional space, providing the Transformer with a richer context for each patch. In this process, **positional encodings** are added to preserve the structural relationships within the channel, ensuring the Transformer captures both spatial and frequency dependencies.
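The patching and embedding steps above can be sketched in a few lines of NumPy. This is an illustrative toy example, not LWM's internal implementation: the exact patch layout is an assumption, the projection matrix is random here (in LWM it is learned), and the sinusoidal positional encoding simply stands in for whatever encoding the pre-trained model uses.

```python
import numpy as np

np.random.seed(0)

# Toy channel: 32 antennas x 32 subcarriers, complex entries.
channel = np.random.randn(32, 32) + 1j * np.random.randn(32, 32)

# Split each antenna's 32 subcarriers into patches of 16 consecutive subcarriers.
PATCH_SIZE = 16
n_per_row = channel.shape[1] // PATCH_SIZE            # 2 patches per antenna row
patches = channel.reshape(channel.shape[0], n_per_row, PATCH_SIZE)
patches = patches.reshape(-1, PATCH_SIZE)             # (64, 16) complex patches

# Stack real and imaginary parts so each patch is a real-valued vector.
real_patches = np.concatenate([patches.real, patches.imag], axis=1)  # (64, 32)

# Embed each patch into a 64-dimensional space (random projection here,
# learned in LWM) and add sinusoidal positional encodings.
embed = np.random.randn(real_patches.shape[1], 64)
positions = np.arange(real_patches.shape[0])[:, None]
pos_enc = np.sin(positions / 10000 ** (np.arange(64)[None, :] / 64))
tokens = real_patches @ embed + pos_enc               # (64, 64) token sequence

print(tokens.shape)  # (64, 64)
```

The resulting sequence of position-aware patch embeddings is what the Transformer consumes, one token per patch.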
If you choose to apply **Masked Channel Modeling (MCM)** during inference (by setting `gen_raw=False`), LWM will mask certain patches, just as it did during pre-training. For standard inference, however, masking isn't necessary unless you want to test LWM's robustness to noisy inputs. The LWM loss printed after inference shows how well the model predicted the masked patches.
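Conceptually, MCM masking works like the sketch below. The mask ratio, the choice of masked positions, and the use of a zero vector as the `[MASK]` embedding are all assumptions made for illustration; LWM's actual values are internal to the pre-trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy token sequence: 64 patch embeddings of dimension 64.
tokens = rng.standard_normal((64, 64))

# Mask a fraction of the patches (15% assumed here) by replacing them
# with a [MASK] embedding; a zero vector stands in for it.
MASK_RATIO = 0.15
n_mask = int(MASK_RATIO * tokens.shape[0])
masked_idx = rng.choice(tokens.shape[0], size=n_mask, replace=False)

masked_tokens = tokens.copy()
masked_tokens[masked_idx] = 0.0

# During MCM, the model reconstructs tokens[masked_idx] from masked_tokens;
# the reported loss is the reconstruction error at exactly these positions.
print(n_mask)  # 9
```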
Now, let's move on to tokenize the dataset and load the pre-trained LWM model.
If your dataset requires labels, you can easily generate them using DeepMIMO data. Here's an example that creates labels for either LoS/NLoS classification or beam prediction, depending on the selected scenario:

```python
from input_preprocess import create_labels

tasks = ['LoS/NLoS Classification', 'Beam Prediction']
task = tasks[1]  # Choose 0 for LoS/NLoS labels or 1 for beam prediction labels.
labels = create_labels(task, selected_scenario_names, n_beams=64)  # For beam prediction, n_beams specifies the number of beams in the codebook. For LoS/NLoS classification, you can leave it unchanged, as it doesn't affect label generation.
```
---