---
license: mit
task_categories:
  - question-answering
  - information-retrieval
  - text-generation
language:
  - en
tags:
  - rag
  - retrieval-augmented-generation
  - education
  - course-materials
  - faiss
  - embeddings
  - cse
  - computer-science
size_categories:
  - 1K<n<10K
---
# CSE Course RAG Dataset
A comprehensive dataset for Retrieval-Augmented Generation (RAG) systems containing processed Computer Science and Engineering (CSE) course materials from Ho Chi Minh City University of Technology (HCMUT). This dataset includes pre-built FAISS indices, processed course documents, raw PDFs, and converted images, ready for use in educational RAG applications.
## Dataset Description
This dataset provides a complete pipeline-ready dataset for building RAG systems on educational course materials. It includes:
- Pre-built FAISS indices for fast semantic search
- Processed course data in structured JSON format
- Raw PDF documents (original course materials)
- Converted images (OCR-ready page images)
- Metadata and embeddings for retrieval and generation tasks
The dataset is designed to support research and development in educational AI systems, particularly for question-answering and information retrieval applications.
## Dataset Structure

```
CSE_course_RAG/
├── indices/     # Pre-built FAISS indices for semantic search
├── processed/   # Processed course data (JSON format)
├── raw/         # Raw PDF documents
├── converted/   # Converted page images (OCR-ready)
├── data/        # Additional processed data
└── scratch/     # Temporary processing files
```
## Supported Tasks
- Question Answering: Answer questions about course content using retrieved context
- Information Retrieval: Semantic search over course materials
- Text Generation: Generate answers based on retrieved course content
## Dataset Details

### Dataset Size
- Total Courses: Multiple CSE courses
- Documents: Syllabus and material documents per course
- Chunks: Pre-processed text chunks with embeddings
- Indices: FAISS indices for fast retrieval
### Data Processing

The dataset has been processed through the following pipeline:
- Conversion: PDFs/Office docs → page images
- OCR: PaddleOCR text extraction
- Parsing: Structured JSON extraction (syllabus and material parsers)
- Chunking: Text chunking with overlap
- Embedding: Sentence-transformer embeddings
- Indexing: FAISS index construction
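The chunking step above can be sketched in plain Python. Note that the default chunk size and overlap below are illustrative values, not necessarily the ones used to build this dataset:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split OCR text into overlapping character chunks.

    chunk_size/overlap are example values; adjust to match your embedding model.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    # Stop once the remaining tail is fully covered by the previous chunk's overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

Each consecutive pair of chunks shares `overlap` characters, which helps the retriever recover answers that straddle a chunk boundary.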
### Data Fields

Processed Data (JSON):
- `course`: Course name
- `course_id`: Course code
- `schema_version`: Data schema version
- `slides`: Array of slide objects with:
  - `page_index`: Page number
  - `chapter_num`: Chapter number
  - `source_file`: Source file path
  - `metadata`: Processing metadata
  - `raw_text`: Extracted OCR text

FAISS Indices:
- Vector embeddings for semantic search
- Metadata mappings for chunk retrieval
- Course-specific indices
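As a rough illustration of what an inner-product FAISS index computes over normalized embeddings, here is an equivalent brute-force cosine search in NumPy (the vectors and shapes are hypothetical, not taken from the dataset):

```python
import numpy as np

def brute_force_search(query_vec: np.ndarray, chunk_vecs: np.ndarray, top_k: int = 3):
    """Return (index, score) pairs for the top_k most similar chunks.

    Normalizing both sides makes the dot product equal cosine similarity,
    mirroring an inner-product index built over unit-length embeddings.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in top]
```

A real FAISS index gives the same ranking far faster for large corpora; this sketch is only meant to show the retrieval semantics.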
## Usage

### Download the Dataset

```python
from huggingface_hub import snapshot_download

# Download the entire dataset
dataset_path = snapshot_download(
    repo_id="hatakekksheeshh/CSE_course_RAG",
    repo_type="dataset",
    local_dir="./data",
)
```

Or use the provided download script:

```bash
python dataset.py
```
### Using with RAG Systems

The dataset is designed to work with the CSE Course RAG system:

```python
from rag.query_pipeline import QueryPipeline

# Initialize pipeline with pre-built indices
pipeline = QueryPipeline(
    index_dir="./data/indices",
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
)

# Query the system
result = pipeline.answer(
    query="What is the grading policy?",
    course="Introduction_to_Computing",
)
```
### Loading FAISS Indices

```python
import faiss
import pickle

# Load FAISS index
index = faiss.read_index("./data/indices/course_name.index")

# Load metadata
with open("./data/indices/course_name_metadata.pkl", "rb") as f:
    metadata = pickle.load(f)
```
### Processing Raw Data

If you need to reprocess the data:

```python
import json

# Load processed course data
with open("./data/processed/course_name/course_name.json", "r") as f:
    course_data = json.load(f)
```
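Using the slide fields documented under Data Fields (`slides`, `page_index`, `chapter_num`, `raw_text`), a minimal helper to regroup the OCR text by chapter might look like this; the grouping itself is an illustration, not part of the released pipeline:

```python
def collect_chapter_text(course_data: dict) -> dict[int, str]:
    """Concatenate raw_text per chapter, in page order."""
    chapters: dict[int, list[str]] = {}
    # Sort by page_index so text within a chapter stays in reading order
    for slide in sorted(course_data.get("slides", []), key=lambda s: s["page_index"]):
        chapters.setdefault(slide["chapter_num"], []).append(slide["raw_text"])
    return {ch: "\n".join(texts) for ch, texts in chapters.items()}
```

From here you can re-chunk, re-embed, and rebuild indices with your own parameters.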
## Dataset Statistics
The dataset includes:
- Multiple CSE courses covering various computer science topics
- Structured syllabus data with course information, grading policies, prerequisites
- Course materials including lecture slides and chapter content
- Pre-computed embeddings using sentence-transformers models
- FAISS indices optimized for fast similarity search
## Evaluation
The dataset has been evaluated with the following metrics:
- Answer Faithfulness: +21.1% improvement with query rewriting
- Top Chunk Score: +80.9% improvement in reranker confidence
- Query-Answer Similarity: Semantic alignment between queries and answers
- Retrieval Performance: Query-Chunk similarity and reranker scores
## Limitations
- The dataset contains course materials from HCMUT and may be specific to that institution's curriculum
- OCR quality depends on source document quality
- Some courses may have incomplete or missing materials
- The dataset is primarily in English
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{cse_course_rag_2025,
  title={CSE Course RAG Dataset},
  author={Nguyen Quoc Hieu},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/hatakekksheeshh/CSE_course_RAG}
}
```
## License

This dataset is released under the MIT License. See the LICENSE file for details.

Copyright: © 2025 Nguyen Quoc Hieu, Ho Chi Minh City University of Technology
## Acknowledgments
- Ho Chi Minh City University of Technology (HCMUT) for providing course materials
- HuggingFace for hosting the dataset
- PaddleOCR for OCR capabilities
- sentence-transformers for embedding models
- FAISS for efficient similarity search
**Note:** This dataset is intended for research and educational purposes. Please respect the original course materials' copyright and use appropriately.