---
license: apple-amlr
---

# FlexTok: Resampling Images into 1D Token Sequences of Flexible Length

[`Website`](https://flextok.epfl.ch)&nbsp;| [`arXiv`](https://arxiv.org/abs/2502.13967)&nbsp;| [`GitHub`](https://github.com/apple/ml-flextok)&nbsp;| [`🤗 Demo`](https://huggingface.co/spaces/EPFL-VILAB/FlexTok)&nbsp;| [`BibTeX`](#citation)

Official implementation and pre-trained models for:
[**FlexTok: Resampling Images into 1D Token Sequences of Flexible Length**](https://arxiv.org/abs/2502.13967), arXiv 2025
*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Jesse Allardice](https://github.com/JesseAllardice)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Enrico Fini](https://scholar.google.com/citations?user=OQMtSKIAAAAJ), [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Elmira Amirloo](https://elamirloo.github.io/), [Alaaeldin El-Nouby](https://aelnouby.github.io/), [Amir Zamir](https://vilab.epfl.ch/zamir/), [Afshin Dehghan](https://scholar.google.com/citations?user=wcX-UW4AAAAJ)*

## Installation

For installation instructions, please see https://github.com/apple/ml-flextok.

## Usage

To load the 8-channel VAE-GAN directly from the Hugging Face Hub and autoencode a sample image, call:

```python
from diffusers.models import AutoencoderKL

from flextok.utils.demo import imgs_from_urls

# Load the pre-trained 8-channel VAE from the Hub
vae = AutoencoderKL.from_pretrained(
    'EPFL-VILAB/flextok_vae_c8', low_cpu_mem_usage=False
).eval()

# Load example images of shape (B, 3, H, W), normalized to [-1, 1]
imgs = imgs_from_urls(urls=['https://storage.googleapis.com/flextok_site/nb_demo_images/0.png'])

# Autoencode with the VAE
latents = vae.encode(imgs).latent_dist.sample()  # Shape (B, 8, H//8, W//8)
reconst = vae.decode(latents).sample  # Shape (B, 3, H, W)
```

## Citation

If you find this repository helpful, please consider citing our work:

```
@article{flextok,
  title={{FlexTok}: Resampling Images into 1D Token Sequences of Flexible Length},
  author={Roman Bachmann and Jesse Allardice and David Mizrahi and Enrico Fini and O{\u{g}}uzhan Fatih Kar and Elmira Amirloo and Alaaeldin El-Nouby and Amir Zamir and Afshin Dehghan},
  journal={arXiv 2025},
  year={2025},
}
```

## License

The model weights in this repository are released under the Apple Model License for Research.
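As a quick sanity check for the usage example above: the c8 VAE maps a `(B, 3, H, W)` image batch to a `(B, 8, H//8, W//8)` latent, i.e. 8 latent channels at 8× spatial downsampling. The small helper below is purely illustrative (it is not part of the `flextok` package) and just makes that shape arithmetic explicit:

```python
def vae_latent_shape(batch, height, width, latent_channels=8, downsample=8):
    """Expected latent shape for the c8 VAE: (B, 8, H//8, W//8).

    The defaults match the EPFL-VILAB/flextok_vae_c8 setup described above;
    this helper is a sketch for sanity-checking shapes, not part of flextok.
    """
    assert height % downsample == 0 and width % downsample == 0, \
        "H and W should be multiples of the spatial downsampling factor"
    return (batch, latent_channels, height // downsample, width // downsample)

# A batch of two 256x256 RGB images encodes to a (2, 8, 32, 32) latent
print(vae_latent_shape(2, 256, 256))  # → (2, 8, 32, 32)
```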