# Path to the root directory of the test set.
```
* **format**: Defines the annotation format of your dataset; it must match the format in which your annotations are stored.
  * `tfs`: TensorFlow Object Detection API format
  * `coco`: COCO dataset format (JSON annotations)
  * `pascal_voc`: Pascal VOC XML annotation format
  * `darknet_yolo`: YOLO Darknet text file annotations
* **dataset_name**: Specifies the dataset you are using. This can be a well-known dataset such as `coco` or `pascal_voc`, or `custom_dataset` if you have your own data. Each dataset name accepts only certain formats, as shown in the table below:

| Dataset Name | Allowed Formats | Description |
|------------------|-------------------------|----------------------------------------------------------------------------------------------|
| `coco` | `coco`, `tfs` | Native COCO format or TFS TensorFlow format |
| `pascal_voc` | `pascal_voc`, `tfs` | Native Pascal VOC format or TFS TensorFlow format |
| `darknet_yolo` | `darknet_yolo`, `tfs` | Native Darknet YOLO format or TFS TensorFlow format |
| `custom_dataset` | `tfs`                   | Only the TFS TensorFlow format; use this when the dataset was already converted before evaluation |
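As a rough sketch, the table's rules can be expressed as a simple lookup. The `ALLOWED_FORMATS` mapping and `validate_dataset_config` helper below are hypothetical names used for illustration, not part of the service's API:

```python
# Hypothetical sketch of the dataset_name/format rules from the table above;
# validate_dataset_config is NOT part of the evaluation service's API.
ALLOWED_FORMATS = {
    "coco": {"coco", "tfs"},
    "pascal_voc": {"pascal_voc", "tfs"},
    "darknet_yolo": {"darknet_yolo", "tfs"},
    "custom_dataset": {"tfs"},  # custom data must already be converted to TFS
}

def validate_dataset_config(dataset_name: str, fmt: str) -> None:
    """Raise ValueError if the dataset_name/format pair is not allowed."""
    allowed = ALLOWED_FORMATS.get(dataset_name)
    if allowed is None:
        raise ValueError(f"Unknown dataset_name: {dataset_name!r}")
    if fmt not in allowed:
        raise ValueError(
            f"Format {fmt!r} is not valid for {dataset_name!r}; "
            f"expected one of {sorted(allowed)}"
        )

validate_dataset_config("coco", "tfs")  # OK: the combination used in this example
```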
In this example, we are using the `coco` dataset in the `tfs` format, and the path to the test set is provided in the `test_path` parameter.
The state machine below describes the rules for handling dataset paths during evaluation.

When working with a dataset for the first time, we suggest setting the `check_image_files` attribute to `true`. The system will then load each image file and identify any corrupt, unsupported, or non-image files. The paths of any problematic files are reported.
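For intuition, the sketch below shows one way such a check could work, using Pillow to walk a directory and collect unreadable files. It is an illustration under assumptions, not the service's actual implementation:

```python
# Hypothetical sketch of an image integrity check; the evaluation service's
# real check_image_files logic may differ.
from pathlib import Path

from PIL import Image, UnidentifiedImageError

def find_problem_files(root: str) -> list[Path]:
    """Return paths of files that cannot be opened and verified as images."""
    problems = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            with Image.open(path) as img:
                img.verify()  # raises if the file is truncated or corrupt
        except (UnidentifiedImageError, OSError, SyntaxError):
            problems.append(path)
    return problems

for bad in find_problem_files("./datasets/COCO_2017_person/"):
    print(f"Problem file: {bad}")
```

Note that annotation files living alongside the images (JSON, XML, or text) would also be flagged by such a naive walk, so a real check would likely filter by image file extensions first.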
If neither a validation set path nor a test set path is provided for evaluating a model trained with the training service, the data under the `training_path` directory is split into a training set and a validation set. By default, 80% of the data is used for training and the evaluation service uses the remaining 20% as the validation set.
If you want to use a different split ratio, specify the fraction of the data to be used for the validation set in the `validation_split` parameter, as shown in the YAML example below (a sketch of the splitting logic follows the example). To keep the [training](./README_TRAINING.md) and evaluation processes consistent, you must set the same `validation_split` value in both services:
```yaml
dataset:
  format: tfs
  dataset_name: coco
  class_names: [person]
  training_path: ./datasets/COCO_2017_person/
  validation_path:
  validation_split: 0.20
  test_path:
```
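With this configuration, an 80/20 split is derived from `training_path`. The sketch below shows one deterministic way such a split could be computed; the `split_dataset` helper and its fixed seed are assumptions for illustration, not the service's documented behavior. A deterministic split is what allows training and evaluation to see the same partition when both use the same `validation_split` value:

```python
# Hypothetical sketch of an 80/20 train/validation split over training_path;
# split_dataset is NOT the service's actual implementation.
import random
from pathlib import Path

def split_dataset(training_path: str, validation_split: float = 0.20, seed: int = 0):
    """Shuffle image paths reproducibly and carve off a validation subset."""
    files = sorted(Path(training_path).rglob("*.jpg"))
    rng = random.Random(seed)  # fixed seed -> the same split on every run
    rng.shuffle(files)
    n_val = int(len(files) * validation_split)
    return files[n_val:], files[:n_val]  # (training set, validation set)

train_files, val_files = split_dataset("./datasets/COCO_2017_person/", 0.20)
print(f"{len(train_files)} training images, {len(val_files)} validation images")
```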