---
license: cc-by-2.0
task_categories:
- object-detection
language:
- en
tags:
- traffic
- trafficsigns
- streetview
- yolo
---

# TT100K Dataset

The [Tsinghua-Tencent 100K (TT100K)](https://cg.cs.tsinghua.edu.cn/traffic-sign/) dataset is a large-scale traffic sign benchmark created from 100,000 Tencent Street View panoramas. It is designed for traffic sign detection and classification in real-world conditions, giving researchers and developers a comprehensive resource for building robust traffic sign recognition systems.

The dataset contains **100,000 images** with over **30,000 traffic sign instances** across **221 different categories**. The images capture large variations in illumination, weather conditions, viewing angles, and distances, making the dataset well suited for training models that must perform reliably in diverse real-world scenarios.

This dataset is particularly valuable for:

- Autonomous driving systems
- Advanced driver assistance systems (ADAS)
- Traffic monitoring applications
- Urban planning and traffic analysis
- Computer vision research in real-world conditions

![sample_with_bboxes](https://cdn-uploads.huggingface.co/production/uploads/60f6ff297666eeb11bc2b8d7/TLLyjmgoY7BTt5_WjIUSO.png)

## Key Features

The TT100K dataset provides several key advantages:

- **Scale**: 100,000 high-resolution images (2048×2048 pixels)
- **Diversity**: 221 traffic sign categories covering Chinese traffic signs
- **Real-world conditions**: Large variations in weather, illumination, and viewing angles
- **Rich annotations**: Each sign includes a class label, bounding box, and pixel mask
- **Comprehensive coverage**: Includes prohibitory, warning, mandatory, and informative signs
- **Train/test split**: Pre-defined splits for consistent evaluation

## Dataset Structure

The TT100K dataset is split into three subsets:

1. **Training Set**: The primary collection of traffic-scene images used to train models to detect and classify different types of traffic signs.
2. **Validation Set**: A subset used during model development to monitor performance and tune hyperparameters.
3. **Test Set**: A held-out collection of images used to evaluate the final model's ability to detect and classify traffic signs in real-world scenarios.

The 221 traffic sign categories are organized into several major groups:

**Speed Limit Signs (pl\*, pm\*)**

1. **pl\***: Prohibitory speed limits (pl5, pl10, pl20, pl30, pl40, pl50, pl60, pl70, pl80, pl100, pl120)
2. **pm\***: Minimum speed limits (pm5, pm10, pm20, pm30, pm40, pm50, pm55)

**Prohibitory Signs (p\*, pn\*, pr\*)**

1. **p1-p28**: General prohibitory signs (no entry, no parking, no stopping, etc.)
2. **pn/pne**: No parking and no entry signs
3. **pr\***: Various restriction signs (pr10, pr20, pr30, pr40, pr50, etc.)

**Warning Signs (w\*)**

1. **w1-w66**: Warning signs for various road hazards, conditions, and situations
2. Includes pedestrian crossings, sharp turns, slippery roads, animals, construction, etc.

**Height/Width Limit Signs (ph\*, pb\*)**

1. **ph\***: Height limit signs (ph2, ph2.5, ph3, ph3.5, ph4, ph4.5, ph5, etc.)
2. **pb\***: Width limit signs

**Informative Signs (i\*, il\*, io, ip)**

1. **i1-i15**: General informative signs
2. **il\***: Speed limit information (il60, il80, il100, il110)
3. **io**: Other informative signs
4. **ip**: Information plates

### Usage

```python
import base64
from io import BytesIO

from datasets import load_dataset
from PIL import Image

# Load the dataset
dataset = load_dataset("PrashantDixit0/TT-100K")

# Access splits
train_data = dataset["train"]
val_data = dataset["val"]
test_data = dataset["test"]

# Example: decode and display the first training image
sample = train_data[0]
image = Image.open(BytesIO(base64.b64decode(sample["image"]["bytes"])))
image.show()
```

### Citation

If you use this dataset, please cite the original TT100K paper:

```bibtex
@inproceedings{zhu2016traffic,
  title={Traffic-sign detection and classification in the wild},
  author={Zhu, Zhe and Liang, Dun and Zhang, Songhai and Huang, Xiaolei and Li, Baoli and Hu, Shimin},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2110--2118},
  year={2016}
}
```
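
### Example: Grouping Category Labels

With 221 fine-grained labels, it is often convenient to collapse categories into the coarse groups described in the Dataset Structure section. The helper below is a minimal sketch based on the prefix naming scheme above (pl\*, pm\*, w\*, etc.); the `sign_group` function and its group names are illustrative assumptions, not part of the dataset's API. Note that longer prefixes must be checked before shorter ones (e.g. `pl` before `p`).

```python
def sign_group(label: str) -> str:
    """Map a TT100K label like 'pl60' or 'w13' to a coarse group.

    Illustrative sketch: the prefix rules follow the naming scheme
    described in this card, not an official TT100K API.
    """
    # Longer prefixes first, so 'pl60' matches 'pl' rather than 'p'.
    prefix_groups = [
        ("pl", "speed limit (prohibitory)"),
        ("pm", "speed limit (minimum)"),
        ("ph", "height limit"),
        ("pb", "width limit"),
        ("pr", "restriction"),
        ("pn", "no parking / no entry"),
        ("il", "speed limit information"),
        ("io", "other informative"),
        ("ip", "information plate"),
        ("w", "warning"),
        ("i", "informative"),
        ("p", "prohibitory"),
    ]
    for prefix, group in prefix_groups:
        if label.startswith(prefix):
            return group
    return "unknown"

print(sign_group("pl60"))  # speed limit (prohibitory)
print(sign_group("pne"))   # no parking / no entry
print(sign_group("w13"))   # warning
```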
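
### Example: Converting Boxes to YOLO Format

Since this card tags the dataset for YOLO-style training, a common preprocessing step is converting pixel-space boxes to YOLO's normalized center format. The helper below is a generic sketch: it assumes a box given as `(xmin, ymin, xmax, ymax)` in pixels and defaults to the 2048×2048 image size stated above; the exact annotation field names in this dataset are not documented here, so you will need to adapt it to the actual schema.

```python
def to_yolo(xmin, ymin, xmax, ymax, img_w=2048, img_h=2048):
    """Convert a pixel-space box to YOLO's normalized (cx, cy, w, h).

    Assumes corner coordinates in pixels; TT100K images are 2048x2048,
    which is used as the default size here.
    """
    cx = (xmin + xmax) / 2 / img_w  # normalized box center x
    cy = (ymin + ymax) / 2 / img_h  # normalized box center y
    w = (xmax - xmin) / img_w       # normalized box width
    h = (ymax - ymin) / img_h       # normalized box height
    return cx, cy, w, h

print(to_yolo(512, 512, 1024, 1024))  # (0.375, 0.375, 0.25, 0.25)
```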