HVU_VIC
VIC is an open-source Vietnamese Image–Caption corpus, accompanied by supporting tools, created to facilitate the development of image captioning and vision–language systems, particularly for low-resource language settings. The dataset was developed by a research team at Hung Vuong University, Phu Tho, Vietnam, led by Dr. Ha Nguyen, Deputy Head of the Department of Engineering Technology. VIC was constructed with a fully automated data-building pipeline that combines web crawling of public sources, semantic extraction of image–text pairs, and heuristic/AI-assisted filtering, which reduces noise, improves consistency, and makes the corpus practical for real-world applications.
📋 Dataset Description
- Language: Vietnamese
- Task: Image Captioning (Image → Caption)
- Repo type: Dataset
- Annotation format: CSV, delimiter |
- Schema: image|caption
- #Images: 29,970
- #Captions: 29,970
- Captions per image: 1
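The annotation schema above can be parsed directly with pandas (listed in the install step below). A minimal sketch, where the sample row is illustrative and not taken from the corpus:

```python
import io
import pandas as pd

# One illustrative row in the VIC annotation schema (the caption is made up,
# not from the corpus); the real file would be read the same way:
#   pd.read_csv("Captions_30k.csv", sep="|")
sample = "image|caption\n00001.jpg|một người đàn ông lướt sóng trên đại dương\n"
df = pd.read_csv(io.StringIO(sample), sep="|")
```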
⚙️ Creation Pipeline
The dataset was built using a 4-stage automated process:
- Selecting relevant public websites that contain images paired with Vietnamese descriptions/captions.
- Automated data crawling to collect raw webpages, associated images, and basic metadata.
- Extraction via page structure (semantic tags) to obtain clean Image–Caption pairs, then normalizing file names and captions before exporting annotations to CSV (image|caption).
- AI/heuristic-assisted filtering to remove noisy samples (broken/duplicate images, empty/very short captions, wrong language, corrupted characters) and improve overall consistency.
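The final filtering stage could be sketched as below. This is an illustrative reconstruction, not the project's actual filter: the minimum caption length and the specific rules are assumptions based on the criteria listed above (broken/duplicate images, empty/very short captions, corrupted characters).

```python
def is_clean(image, caption, seen_images, min_words=3):
    """Heuristic acceptance test for one image|caption row (illustrative)."""
    if not image or not caption:
        return False                      # empty fields
    if len(caption.split()) < min_words:  # very short captions
        return False
    if image in seen_images:              # duplicate images
        return False
    if "\ufffd" in caption:               # corrupted / mis-decoded characters
        return False
    return True

def filter_rows(rows):
    """Keep only (image, caption) rows that pass the heuristics; first occurrence wins."""
    seen, kept = set(), []
    for image, caption in rows:
        if is_clean(image, caption, seen):
            seen.add(image)
            kept.append((image, caption))
    return kept
```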
📊 Quality Evaluation
A captioning model with a ResNet-50 (ImageNet-pretrained) encoder and an attention-based LSTM decoder trained on VIC (30K) was evaluated by generating captions for 500 randomly sampled images.
| Metric | Score |
|---|---|
| BLEU | 19.86 |
| Semantic similarity (cosine) | 0.673 |
These results indicate that VIC is a practical resource for developing and evaluating Vietnamese image captioning and vision–language models in low-resource settings.
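For reference, sentence-level BLEU between a reference caption and a generated one can be computed with nltk (listed in the requirements below). Whitespace tokenization and the smoothing method are assumptions, not necessarily what the evaluation above used; scores come out in [0, 1], so multiply by 100 for the conventional BLEU scale used in the table.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_score(reference, hypothesis):
    """Sentence-level BLEU between one reference and one generated caption.

    Whitespace tokenization and method1 smoothing are assumptions;
    the result is in [0, 1] (multiply by 100 for the usual BLEU scale).
    """
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)
```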
📁 Dataset Structure
```
HVU_VIC/
├── 30K_IMG_1.zip
├── Captions_30k.csv
├── captioner30k.pt
├── train_csv_captioner.py
├── infer_ui.py
└── README.md
```
📁 Vietnamese Image Captioning Tool
🛠️ Requirements
- Python 3.10+
- PyTorch >= 2.0
- torchvision
- Pillow
- gradio
- nltk (for BLEU evaluation)
- sentence-transformers (for cosine similarity evaluation, optional)
📦 Install Required Libraries
```shell
pip install pillow gradio pandas tqdm nltk sentence-transformers
```
(Install PyTorch + torchvision separately from pytorch.org if not installed yet.)
📥 Load Dataset from Hugging Face Hub
```python
from datasets import load_dataset

ds = load_dataset("QUYENMANHNGUYEN/GeneratingCaptions",
                  data_files="Captions_30k.csv", split="train", delimiter="|")
print(ds[0])
```
📚 Usage
- Train and evaluate Vietnamese image captioning models.
- Benchmark vision–language systems on Vietnamese captions (e.g., using BLEU and embedding-based cosine similarity).
- Build demo applications for image caption generation (e.g., Gradio UI) and support human evaluation (Descriptive/Non-descriptive + 1–5 relevance).
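The embedding-based cosine similarity mentioned above is ordinary cosine similarity applied to caption embeddings. In practice the vectors would come from a sentence-transformers model (listed in the requirements); the pure-Python sketch below only illustrates the formula itself.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors.

    In practice u and v would be caption embeddings from a
    sentence-transformers model; plain lists are used here for illustration.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```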
🔹 Fine-tuning
```shell
python train_csv_captioner.py
```
This will:
- Load the annotations from the CSV file (delimiter |, schema image|caption).
- Train a Vietnamese image captioning model with a ResNet-50 (ImageNet-pretrained) encoder and an attention-based LSTM decoder on VIC.
- Save the trained checkpoint to captioner.pt (including the model weights and vocabulary).
(Or download the pre-trained model: captioner30k.pt.)
🔹 Generating Captions
```shell
python infer_ui.py
```
Example:
Input image:
Output:
Một người đàn ông lướt sóng trên đại dương
(A man is surfing in the ocean)
You can adjust in infer_ui.py: --checkpoint, --img_size, --max_len, --device (and run with --web or --image_path).
📌 Citation
If you use HVU_VIC in your research, please cite:
@inproceedings{nguyen2026method,
author = {Ha Nguyen and Quyen Nguyen and Dang Do and Ngoc Hoang and Chung Mai},
title = {A Method for Building an Image Caption Corpus for Low-Resource Languages},
booktitle = {Proceedings of the 2026 International Symposium on Information and Communication Technology},
year = {2026},
publisher = {...},
series = {...},
address = {...},
note = {To appear}
}
❤️ Support / Funding
If you find HVU_VIC useful, please consider supporting our work.
Your contributions help us maintain the dataset, improve quality, and release new versions (cleaning, expansion, benchmarks, and tools).
🇻🇳 Donate via VietQR (scan to support)
This VietQR / NAPAS 247 code can be scanned by Vietnamese banking apps and some international payment apps that support QR bank transfers.
Bank: VietinBank (Vietnam)
Account name: NGUYEN TIEN HA
Account number: 103004492490
Branch: VietinBank CN PHU THO - HOI SO
🌍 International Support (Quick card payment)
If you are outside Vietnam, you can support this project via Buy Me a Coffee
(no PayPal account needed — pay directly with a credit/debit card):
- BuyMeACoffee: https://buymeacoffee.com/hanguyen0408
🌍 International Support (PayPal)
If you prefer PayPal, you can also support us here:
- PayPal.me: https://paypal.me/HaNguyen0408
✨ Other ways to support
- ⭐ Star this repository / dataset on Hugging Face
- 📌 Cite our paper if you use it in your research
- 🐛 Open issues / pull requests to improve the dataset and tools
📬 Contact / Maintainers
For questions, feedback, collaborations, or issue reports related to HVU_VIC, please contact:
Dr. Ha Nguyen (Project Lead)
Hung Vuong University, Phu Tho, Vietnam
Email: nguyentienha@hvu.edu.vn