---
license: mit
task_categories:
- audio-to-audio
language:
- en
datasets:
- Blinorot/lensless_mic_librispeech
- Blinorot/lensless_mic_random
- Blinorot/lensless_mic_songdescriber
---
# Model Card for LenslessMic Reconstruction Algorithms
## Model Summary
Reconstruction algorithms from the paper ["LenslessMic: Audio Encryption and Authentication via Lensless Computational Imaging"](https://arxiv.org/abs/2509.16418).
To download the models and work with them, use our official repository.
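Alternatively, the checkpoints can be fetched programmatically with `huggingface_hub`. A minimal sketch follows; the repository ID below is a placeholder, so substitute the actual model repository named in the official repository:

```python
# Minimal download sketch. The repo_id is hypothetical: use the actual
# model repository named in the official LenslessMic repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Blinorot/lensless_mic_models")  # placeholder ID
print(local_dir)  # local folder containing the checkpoint_tag directories
```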
## Model Details
The models are saved in the following format:
```
.
└── checkpoint_tag
    ├── checkpoint_name.pth   # PyTorch checkpoint with the model state dict under the 'state_dict' key
    └── config.yaml           # Hydra config used to train the model
```
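A minimal loading sketch, assuming the layout above; the paths are placeholders, and the model class itself comes from the official repository:

```python
# Minimal loading sketch (placeholder paths, assuming the layout above).
import torch
from omegaconf import OmegaConf

ckpt = torch.load("checkpoint_tag/checkpoint_name.pth", map_location="cpu")
state_dict = ckpt["state_dict"]  # model weights, stored under the 'state_dict' key
config = OmegaConf.load("checkpoint_tag/config.yaml")  # Hydra config used for training

# The model class is defined in the official repository; after instantiating
# it from `config`, restore the weights with:
# model.load_state_dict(state_dict)
```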
A checkpoint tag follows the format below (a tag-browsing sketch is given after the list):

```
{latent_size}_{training_dataset}_{loss_functions_used}_{reconstruction_algorithm}
```
- The `latent_size` is either `16x16` or `32x32`, depending on the neural audio codec used in the dataset.
- The training dataset is either `random` or `librispeech`. For `librispeech`, a grouped version can be used, tagged as `group_n_m_r_c` (see the LenslessMic version of LibriSpeech), with `288x288` appended after `group` if the sensor image size is not the default 256x256. The model version fine-tuned on `train-other` is tagged as `librispeech_other` with `_ft` at the end.
- The `loss_functions_used` is usually MSE, SSIM, and Raw SSIM, as in the paper. We also provide checkpoints trained with MSE only, with MSE and SSIM, and with all three combined with an L1 waveform or Mel loss.
- The reconstruction algorithm: `PSF_Unet4M_U5_Unet4M` corresponds to the Learned and R-Learned methods from the paper, while `Unet8M` is the `NoPSF` method.
## Citation
If you use these models, please cite them as follows:
```bibtex
@article{grinberg2025lenslessmic,
  title   = {LenslessMic: Audio Encryption and Authentication via Lensless Computational Imaging},
  author  = {Grinberg, Petr and Bezzam, Eric and Prandoni, Paolo and Vetterli, Martin},
  journal = {arXiv preprint arXiv:2509.16418},
  year    = {2025},
}
```