---
license: mit
task_categories:
  - question-answering
  - text-generation
tags:
  - automation
  - home
  - assistant
language: ["en", "es", "fr", "de", "pl"]
pretty_name: Home Assistant Requests V2
size_categories:
  - 10K<n<100K
---

NOTE: If you are viewing this dataset on HuggingFace, you can download the "small" dataset variant directly from the "Files and versions" tab.

## Assembling the dataset

The dataset is generated from the different CSV "piles". The piles contain different chunks of requests that are assembled into a final context that is presented to the LLM. For example, `piles/pile_of_device_names.csv` contains only the names of various devices; these are used as part of the context and are also inserted into the templates in `piles/pile_of_templated_actions.csv` and `piles/pile_of_status_requests.csv`. The logic for assembling the final dataset from the piles is contained in [generate_data.py](./generate_data.py).

### Prepare environment

Start by installing system dependencies:

`sudo apt-get install python3-dev`

Then create a Python virtual environment and install all necessary libraries:

```
python3 -m venv .generate_data
source .generate_data/bin/activate
pip3 install -r requirements.txt
```

### Generating the dataset from piles

`python3 generate_data.py --train --test --small --language english german french spanish polish`

Supported dataset splits are `--test`, `--train`, & `--sample`.

Arguments to set the train dataset size are `--small`, `--medium`, `--large`, & `--xl`.

Languages can be enabled using `--language english german french spanish polish`.
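To illustrate how the piles combine, here is a minimal sketch of the template-filling step: device names from one pile are substituted into action templates from another to produce request examples. The CSV columns, the `<device_name>` placeholder syntax, and the output fields are assumptions for illustration; the actual logic lives in [generate_data.py](./generate_data.py).

```python
import csv
import io

# Hypothetical miniature "piles" standing in for the real CSV files
# under piles/ in this repository.
device_names_csv = "device_name\nkitchen_light\nfront_door\n"
templated_actions_csv = (
    "template,service\n"
    "turn on the <device_name>,light.turn_on\n"
    "lock the <device_name>,lock.lock\n"
)

def load_pile(text: str) -> list[dict]:
    """Parse a pile's CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

devices = load_pile(device_names_csv)
templates = load_pile(templated_actions_csv)

# Cross each action template with each device name to build examples.
examples = []
for tpl in templates:
    for dev in devices:
        request = tpl["template"].replace("<device_name>", dev["device_name"])
        examples.append({"request": request, "service": tpl["service"]})

for ex in examples:
    print(ex["request"], "->", ex["service"])
```

The real script additionally handles multiple languages, split sizes, and matching device types to compatible services, but the core idea is the same cross-product of piles.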