Citation
This dataset is used in the following publications. Please cite them if you use any part of it.
```bibtex
@inproceedings{guler2025robust,
  title     = {Robust Channel Representation for Wireless: A Multi-Task Masked Contrastive Approach},
  author    = {Guler, Berkay and Geraci, Giovanni and Jafarkhani, Hamid},
  booktitle = {NeurIPS 2025 Workshop on AI for Next Generation Wireless (AI4NextG)},
  year      = {2025},
  url       = {https://openreview.net/forum?id=KXNDs9ZGb9}
}

@article{guler2025multitask,
  title   = {A Multi-Task Foundation Model for Wireless Channel Representation Using Contrastive and Masked Autoencoder Learning},
  author  = {Guler, Berkay and Geraci, Giovanni and Jafarkhani, Hamid},
  journal = {arXiv preprint arXiv:2505.09160},
  year    = {2025},
  url     = {https://arxiv.org/abs/2505.09160}
}
```
Loading the Data
```python
from pathlib import Path

import numpy as np
from tqdm import tqdm


def get_sample_file_content(file_path):
    def get_key_data_pairs(file_path, allow_empty=False):
        """Load every non-scalar array stored in a single .npz file."""
        data_dict = {}
        with np.load(file_path, allow_pickle=True) as file_content:
            for key in file_content.keys():
                if key not in data_dict:
                    # Skip zero-dimensional (scalar) entries unless requested.
                    if not allow_empty and len(file_content[key].shape) == 0:
                        continue
                    data_dict[key] = file_content[key]
        return data_dict

    # Return the contents of the first regular (non-hidden) file found.
    for file in tqdm(file_path.iterdir(), desc="Loading pretraining files"):
        if file.is_file() and not file.name.startswith("."):
            return get_key_data_pairs(file)


pretrain_path = Path.cwd() / "sub6" / "pretrain"
sample_data = get_sample_file_content(pretrain_path)
for key, data in sample_data.items():
    print(f"Key {key} has shape:\t{data.shape}")
```
```
Loading pretraining files: 0it [00:00, ?it/s]
Key channels has shape:	(10451, 1, 32, 32)
Key rx_pos has shape:	(10451, 3)
Key tx_pos has shape:	(1, 3)
Key los has shape:	(10451,)
Key active_mask_original_indices has shape:	(10451,)
Key beam_labels has shape:	(10451, 3)
```
- `channels` stores the channels with shape `(number of channels, number of rx antennas, number of tx antennas, number of subcarriers)`
- `rx_pos` stores the 3D positions of the users
- `tx_pos` stores the TX position
- `los` stores a binary array of 0's and 1's corresponding to nLoS and LoS channels, respectively
- `beam_labels[:, 0]` stores the best beam index from a uniform codebook with size 16
- `beam_labels[:, 1]` stores the best beam index from a uniform codebook with size 32
- `beam_labels[:, 2]` stores the best beam index from a uniform codebook with size 32
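As an illustration of how these keys are typically used, the sketch below filters the channel tensor down to LoS samples and pulls the size-16 codebook labels. The arrays here are random stand-ins with the shapes printed above, not real dataset contents.

```python
import numpy as np

# Random stand-ins matching the dataset's shapes (not real channel data).
rng = np.random.default_rng(0)
n = 10451
channels = rng.standard_normal((n, 1, 32, 32))
los = rng.integers(0, 2, size=n)               # 0 = nLoS, 1 = LoS
beam_labels = rng.integers(0, 16, size=(n, 3))

# Boolean masking keeps only LoS channels.
los_channels = channels[los == 1]

# First column: best beam indices for the size-16 codebook.
beams_16 = beam_labels[:, 0]

print(los_channels.shape, beams_16.shape)
```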
Folder Structure
```
├── sub6/                                # Contains 3.5 GHz channel samples
│   ├── pretrain/                        # Used for pretraining
│   │   ├── boston5g_3p5_bs000.npz
│   │   ├── city_66_bruxelles_3p5_bs000.npz
│   │   └── ...
│   └── task/                            # Used for downstream task evaluations
│       ├── train/                       # Can be used if finetuning is performed
│       │   ├── city_7_sandiego_3p5_bs000.npz
│       │   ├── city_7_sandiego_3p5_bs001.npz
│       │   └── ...
│       └── test/                        # Held-out test files for downstream tasks
│           ├── city_3_houston_3p5_bs000.npz
│           ├── city_3_houston_3p5_bs001.npz
│           └── ...
│
└── mmwave/                              # Contains 28 GHz channel samples
    └── task/                            # Used for downstream task evaluations
        ├── train/                       # Can be used if finetuning is performed
        │   ├── city_7_sandiego_28_bs000.npz
        │   ├── city_7_sandiego_28_bs001.npz
        │   └── ...
        └── test/                        # Held-out test files for downstream tasks
            ├── city_3_houston_28_bs000.npz
            ├── city_3_houston_28_bs001.npz
            └── ...
```
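Given this layout, per-split file discovery is a one-liner with `pathlib`. A minimal sketch (the helper name `list_npz_files` is ours, not part of the dataset):

```python
from pathlib import Path


def list_npz_files(split_dir):
    """Return the sorted .npz files in a split directory, skipping hidden files."""
    return sorted(
        p for p in Path(split_dir).iterdir()
        if p.is_file() and p.suffix == ".npz" and not p.name.startswith(".")
    )

# e.g. list_npz_files("sub6/task/test") for the 3.5 GHz held-out files
```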
Dataset Properties
| Parameter | Value |
|---|---|
| Num. Subcarriers | 32 |
| Subcarrier spacing | 30 kHz |
| Total bandwidth | 960 kHz |
| TX Antennas | 32 (half-wavelength spacing) |
| Radiation pattern | Isotropic, single polarization |
| Maximum propagation paths | 20 |
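The table's total bandwidth follows directly from the other two rows (32 subcarriers × 30 kHz = 960 kHz). A quick check, plus a centered subcarrier frequency grid for illustration:

```python
import numpy as np

num_subcarriers = 32
scs_hz = 30e3  # subcarrier spacing from the table

# Total bandwidth = 32 * 30 kHz = 960 kHz, matching the table.
total_bandwidth_hz = num_subcarriers * scs_hz

# Subcarrier frequency offsets centered on the carrier.
offsets_hz = (np.arange(num_subcarriers) - num_subcarriers // 2) * scs_hz
print(total_bandwidth_hz, offsets_hz[0], offsets_hz[-1])
```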
Dataset Generation
Using the DeepMIMO tool, we generate 2.5M samples at 3.5 GHz operating frequency from 56 scenarios for pretraining. Each sample corresponds to a channel for a distinct user-base station pair from one of the following scenarios:
Pretraining Scenarios (56 cities): Amsterdam, ASU Campus, Athens, Bangkok, Barcelona, Beijing, Boston, Brussels, Cairo, Cape Town, Charlotte, Chicago, Denver, Dubai, Edinburgh, Florence, Fort Worth, Fujiyoshida, Granada, Hatsukaichi, Helsinki, Hong Kong, Indianapolis, Istanbul, Jerusalem, Kyoto, Havana, Lisbon, Los Angeles, Madrid, Marrakesh, Mumbai, New Delhi, New York, North Jakarta, Oklahoma City, Philadelphia, Phoenix, Reykjavik, Rio de Janeiro, Rome, San Francisco, San Nicolas, Saint Petersburg, Santa Clara, Santiago, Seattle, Seoul, Singapore, Stockholm, Sumida City, Sydney, Taipei, Taito City, Toronto, and Warsaw.
Downstream Task Dataset
We generate a separate dataset for downstream tasks with 0.55M total samples from 10 unseen scenarios:
Downstream Scenarios (10 cities): Austin, Centro, Columbus, Dallas, Gurbchen, Houston, Miami, Montreal, Prague, and San Diego.
Similar to the pretraining dataset, some scenarios feature more than one base station with uniformly distributed users.
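Since a scenario can span several base-station files, building a task split usually means concatenating samples across files. A minimal sketch under stated assumptions: `load_split` is our helper name, and it assumes every file in the split shares the same keys with a per-sample leading axis.

```python
from pathlib import Path

import numpy as np


def load_split(split_dir):
    """Concatenate per-key arrays across all .npz files in a split directory."""
    merged = {}
    for file in sorted(Path(split_dir).iterdir()):
        if not (file.is_file() and file.suffix == ".npz"):
            continue
        with np.load(file, allow_pickle=True) as content:
            for key in content.keys():
                arr = content[key]
                if arr.ndim == 0:
                    continue  # skip scalar entries
                merged.setdefault(key, []).append(arr)
    return {k: np.concatenate(v, axis=0) for k, v in merged.items()}
```

Note that per-file keys such as `tx_pos` (shape `(1, 3)`) concatenate to one row per file rather than one row per sample.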
Additional Configurations
- 28 GHz channels are generated for downstream task scenarios with available ray-tracing data