Welcome to **LWM** (Large Wireless Model) — a powerful, pre-trained model specifically designed for advanced feature extraction from wireless communication datasets like DeepMIMO. LWM leverages state-of-the-art transformer architectures to offer a deep, contextual understanding of wireless channels, making it the first of its kind tailored for wireless communications.
<div>
<figure>
<img src="images/lwm.PNG" alt="Alt text"/>
<figcaption>Figure 1: Offline pre-training and online embedding generation in LWM. The channel is divided into fixed-size patches, which are linearly embedded and combined with positional encodings before being passed through a Transformer encoder. During self-supervised pre-training, some embeddings are masked, and LWM leverages self-attention to extract deep features that allow the decoder to reconstruct the masked values. For downstream tasks, the generated LWM embeddings enhance performance. The right block shows the LWM architecture, inspired by the original Transformer introduced in the "Attention Is All You Need" paper.</figcaption>
</figure>
</div>

### What Does LWM Offer?

LWM provides a **generalized feature extraction framework** that can be applied across diverse wireless communication tasks. From predicting the strongest mmWave beams to classifying line-of-sight (LoS) and non-line-of-sight (NLoS) channels, and on to far more complex tasks, the model is built to handle the intricacies of wireless environments. **Trained on hundreds of thousands of wireless channel samples**, LWM is designed to **generalize across diverse scenarios**, from urban cityscapes to synthetic environments, ensuring robust performance on a wide range of downstream tasks.
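As a concrete illustration of this downstream use, the sketch below trains a small LoS/NLoS classification head on top of frozen channel embeddings. It is a minimal sketch under stated assumptions: the 64-dimensional embeddings are random stand-ins for real LWM features, and the head architecture and training setup are illustrative, not part of LWM itself.

```python
import torch
import torch.nn as nn

# Stand-in data: in practice these would be embeddings extracted by the
# pre-trained LWM encoder from DeepMIMO channels, paired with LoS/NLoS
# labels from the scenario metadata. Sizes here are illustrative.
num_samples, embedding_dim = 1000, 64
embeddings = torch.randn(num_samples, embedding_dim)
labels = torch.randint(0, 2, (num_samples,))   # 0 = NLoS, 1 = LoS

# Lightweight downstream head: the pre-trained encoder stays frozen and
# only this small classifier is trained on the extracted features.
head = nn.Sequential(
    nn.Linear(embedding_dim, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(head(embeddings), labels)
    loss.backward()
    optimizer.step()
```

The same pattern carries over to other tasks such as beam prediction: keep the embeddings fixed and swap the two-class head for one output per candidate beam.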
### How Does LWM Work?
At its core, LWM is based on the transformer architecture, which is capable of modeling both local and global dependencies within wireless channels. Unlike traditional models that focus on a narrow set of tasks, LWM uses **self-supervised learning** through a technique called **Masked Channel Modeling (MCM)**. This allows the model to learn from unlabeled wireless data, predicting masked patches within a channel, which in turn forces it to understand complex relationships between antennas and subcarriers.
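To make the MCM objective concrete, here is a minimal, self-contained PyTorch sketch of the patching, masking, and reconstruction mechanics described above. All dimensions, the mask ratio, and the module choices are illustrative assumptions rather than LWM's actual architecture or training code.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not LWM's configuration): a channel
# flattened into 64 patches of 16 real values each, embedded into 64 dims.
num_patches, patch_len, d_model = 64, 16, 64
mask_ratio = 0.15

patch_embed = nn.Linear(patch_len, d_model)                      # linear patch embedding
pos_embed = nn.Parameter(torch.randn(1, num_patches, d_model))   # learned positional encodings
mask_token = nn.Parameter(torch.zeros(1, 1, d_model))            # placeholder for masked patches
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
decoder_head = nn.Linear(d_model, patch_len)                     # reconstructs raw patch values

channel_patches = torch.randn(8, num_patches, patch_len)         # toy batch of channels
tokens = patch_embed(channel_patches) + pos_embed

# Randomly mask a fraction of the patch embeddings: this is the MCM objective.
mask = torch.rand(8, num_patches) < mask_ratio
tokens = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)

features = encoder(tokens)            # self-attention mixes information across all patches
reconstruction = decoder_head(features)

# The loss is computed only on the masked positions, so the encoder must
# infer them from the surrounding antenna/subcarrier structure.
loss = ((reconstruction - channel_patches)[mask] ** 2).mean()
print(f"masked-patch reconstruction loss: {loss.item():.3f}")
```

Because the loss covers only the masked patches, the encoder cannot simply copy its input; it has to exploit correlations across antennas and subcarriers, which is what makes the learned features transferable to downstream tasks.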