Update README.md

README.md

```diff
@@ -37,49 +37,48 @@ Explore LWM concepts and applications in this compact video series:
 <tr>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=3sxJR86EFOo" target="_blank">
-      <img src="https://img.youtube.com/vi/3sxJR86EFOo/0.jpg" width="180"
-      <
-      <span style="display:inline-block;margin-top:6px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</span>
-      </span>
+      <img src="https://img.youtube.com/vi/3sxJR86EFOo/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
     </a>
   </td>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=Coqcya9NzFs" target="_blank">
-      <img src="https://img.youtube.com/vi/Coqcya9NzFs/0.jpg" width="180"
-      <
-    </a>
+      <img src="https://img.youtube.com/vi/Coqcya9NzFs/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
+    </a>
   </td>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=e9KvAXMUuQg" target="_blank">
-      <img src="https://img.youtube.com/vi/e9KvAXMUuQg/0.jpg" width="180"
-      <
+      <img src="https://img.youtube.com/vi/e9KvAXMUuQg/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
     </a>
   </td>
 </tr>
 <tr>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=ZB5WVvo6q6U" target="_blank">
-      <img src="https://img.youtube.com/vi/ZB5WVvo6q6U/0.jpg" width="180"
-      <
-    </a>
+      <img src="https://img.youtube.com/vi/ZB5WVvo6q6U/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
+    </a>
   </td>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=5oNnJjos0mo" target="_blank">
-      <img src="https://img.youtube.com/vi/5oNnJjos0mo/0.jpg" width="180"
-      <
-    </a>
+      <img src="https://img.youtube.com/vi/5oNnJjos0mo/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
+    </a>
   </td>
   <td align="center">
     <a href="https://www.youtube.com/watch?v=_RObWck3MMw" target="_blank">
-      <img src="https://img.youtube.com/vi/_RObWck3MMw/0.jpg" width="180"
-      <
-    </a>
+      <img src="https://img.youtube.com/vi/_RObWck3MMw/0.jpg" width="180"/>
+      <div style="margin-top:4px;padding:4px 12px;background:#f97316;color:white;border-radius:6px;font-weight:600;">▶ Watch</div>
+    </a>
   </td>
 </tr>
 </table>
 
 
 
+
 ### How is LWM built?
 
 The LWM model’s structure is based on transformers, allowing it to capture both **fine-grained and global dependencies** within channel data. Unlike traditional models that are limited to specific tasks, LWM employs a **self-supervised** approach through our proposed technique, Masked Channel Modeling (MCM). This method trains the model on unlabeled data by predicting masked channel segments, enabling it to learn intricate relationships between antennas and subcarriers. Utilizing **bidirectional attention**, LWM interprets the full context by attending to both preceding and succeeding channel segments, resulting in embeddings that encode comprehensive spatial information, making them applicable to a variety of scenarios.
```
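The MCM objective described in the changed paragraph is straightforward to sketch. Below is a minimal, hypothetical PyTorch illustration of the idea, not the released LWM implementation: the class name `MCMSketch`, the segment length, the model width, and the roughly 15% masking ratio are illustrative assumptions, and real channel data is complex-valued, which the sketch sidesteps by using real tensors.

```python
import torch
import torch.nn as nn

class MCMSketch(nn.Module):
    """Toy Masked Channel Modeling: a bidirectional transformer encoder
    reconstructs channel segments that were hidden from its input.
    Dimensions and names are illustrative, not the actual LWM config."""

    def __init__(self, seg_len=32, max_segs=64, d_model=64, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(seg_len, d_model)                     # one token per channel segment
        self.pos = nn.Parameter(torch.zeros(1, max_segs, d_model))   # learned segment positions
        self.mask_token = nn.Parameter(torch.zeros(d_model))         # placeholder for hidden segments
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)        # no causal mask: attention is bidirectional
        self.head = nn.Linear(d_model, seg_len)                      # predict the raw segment values

    def forward(self, segments, mask):
        # segments: (batch, n_segs, seg_len) channel split into segments,
        #           e.g. antenna/subcarrier patches flattened to vectors
        # mask:     (batch, n_segs) bool, True where a segment is hidden
        tokens = self.embed(segments) + self.pos[:, : segments.size(1)]
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        context = self.encoder(tokens)    # every token sees preceding AND succeeding segments
        return self.head(context)

# Self-supervised step on unlabeled data: the loss is computed only at the
# masked positions, forcing the model to infer them from surrounding context.
model = MCMSketch()
segments = torch.randn(8, 16, 32)         # toy batch of unlabeled channel segments
mask = torch.rand(8, 16) < 0.15           # hide roughly 15% of the segments (assumed ratio)
pred = model(segments, mask)
loss = ((pred - segments)[mask] ** 2).mean()
loss.backward()
```

After pretraining with an objective like this, the reconstruction head can be discarded and the encoder's output tokens used as the channel embeddings that downstream tasks consume.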