---
license: cc-by-4.0
tags:
- science
- material
- inverse
- design
size_categories:
- 10M<n<100M
---

# OptoLlama Dataset

[![License](https://img.shields.io/badge/License-CC_BY_4.0-green)](./LICENSE)
[![Contact](https://img.shields.io/badge/Contact-SEAOPT-blue)](mailto:SE-AOPT-office@helmholtz-berlin.de)

## Details

The original dataset can be found in the [OptoGPT publication 📝](https://arxiv.org/abs/2304.10294) and here on [HuggingFace](https://huggingface.co/datasets/mataigao/optogpt_data).

**Key Enhancements**

- Inclusion of an **absorption** feature in the model ➕📈
- Increased the **wavelength** range to 300-2,000 nm 💡

## Structure

```
├── materials/
│   ├── Ag.csv
│   ├── Al.csv
│   ├── ...
│   └── ZnSe.csv
├── train/
│   ├── train-0.safetensors
│   ├── train-1.safetensors
│   ├── ...
│   └── train-9.safetensors
├── test/
│   └── test.safetensors
└── tokens.json
```

Each `*.safetensors` file contains 1 million thin-film structures (indexed by `tokens.json`) as well as their simulated absorption, reflection, and transmission (RAT) spectra. The spectrum tensors have the shape *(n_samples, 3 [RAT], 171 [bins])* and are stored as `float16`. The thin-film layer tensors have the shape *(n_samples, 21 [max_depth incl. EOS and PAD])* and are stored as `long`. The layer sequence is ordered from top to bottom, i.e., the lowest index is the top layer (touching the air) and the highest index is the bottom layer (touching the back substrate).
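The 171 spectral bins span the 300-2,000 nm wavelength range described above. Assuming the bins are uniformly spaced (an assumption, not stated explicitly in the card), this yields a 10 nm grid, since (2,000 − 300) / 170 = 10. A minimal sketch of reconstructing that grid and slicing the RAT channels, using a dummy tensor in place of the real data:

```python
import numpy as np

# Assuming the 171 bins uniformly sample 300-2,000 nm: a 10 nm step.
wavelengths = np.linspace(300.0, 2000.0, 171)

# Channel order assumed to follow "absorption, reflection and transmission
# (RAT)"; a zero-filled stand-in tensor replaces the real spectra here.
spectra = np.zeros((4, 3, 171), dtype=np.float16)
absorption, reflection, transmission = spectra[:, 0], spectra[:, 1], spectra[:, 2]

print(wavelengths[0], wavelengths[-1], wavelengths[1] - wavelengths[0])
# 300.0 2000.0 10.0
```

Both the uniform spacing and the A/R/T channel order are assumptions; verify them against your own simulations before relying on them.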
## Loading Data

The following example shows how to load one shard of the dataset:

```python
from safetensors.torch import load_file

data = load_file("train/train-0.safetensors")
spectra = data["spectra"]
thin_films = data["thin_films"]
print(spectra.shape, thin_films.shape)
# torch.Size([1000000, 3, 171]) torch.Size([1000000, 21])
```

## Additional Details

- All spectra have been simulated with [tmm_fast](https://github.com/MLResearchAtOSRAM/tmm_fast)
- The *n* (refractive index), *k* (extinction coefficient), and *wl* (wavelength) values of all materials can be found in the CSV files in the `/materials` folder
- The training and test data are pre-split into the `/train` and `/test` folders, respectively
- The full vocabulary, i.e., all possible tokens, can be found in `tokens.json`, with tokens in the format `material_thickness`
- There are additional tokens for end of sequence (EOS), padding (PAD), and masking (MASK)
- We also offer a sub-sampled version of the test dataset, called *cropped*

| Train samples | Test samples | Test samples (cropped) |
| ------------: | -----------: | ---------------------: |
|    10,000,000 |    1,000,000 |                128,000 |

## Acknowledgements

This work is supported by the Helmholtz Association Initiative and Networking Fund through the Helmholtz AI platform and the HAICORE@KIT grant.

## Citations

If you find our work helpful, please cite as follows:

```
@article{ma2024optogpt,
  title={OptoGPT: a foundation model for inverse design in optical multilayer thin film structures},
  author={Ma, Taigao and Wang, Haozhu and Guo, L Jay},
  journal={Opto-Electronic Advances},
  volume={7},
  number={7},
  year={2024},
  publisher={Opto-Electronic Advance},
  doi={10.29026/oea.2024.240062}
}
```
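## Decoding Example

The padded token sequences can be mapped back to layer stacks via the vocabulary in `tokens.json`. A minimal sketch using a hypothetical mini-vocabulary (the token strings and ids below are illustrative only; the real mapping lives in `tokens.json`):

```python
# Hypothetical mini-vocabulary in the "material_thickness" style of
# tokens.json; the real file maps every token string to an integer id.
vocab = {"PAD": 0, "EOS": 1, "MASK": 2, "SiO2_100": 3, "TiO2_55": 4, "Ag_20": 5}
id_to_token = {idx: tok for tok, idx in vocab.items()}

def decode(ids):
    """Turn a padded id sequence into a top-to-bottom list of layer tokens."""
    layers = []
    for idx in ids:
        tok = id_to_token[idx]
        if tok == "EOS":           # end of the structure
            break
        if tok in ("PAD", "MASK"):
            continue
        layers.append(tok)
    return layers

# Top layer first (air side), bottom layer last (substrate side).
print(decode([4, 3, 5, 1, 0, 0]))  # ['TiO2_55', 'SiO2_100', 'Ag_20']
```

For the real data, build `id_to_token` by inverting the mapping loaded from `tokens.json` and apply `decode` row-wise to the `thin_films` tensor.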