Digital Peter
The Digital Peter dataset can be used for reading texts from manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
The paper is available at http://arxiv.org/abs/2103.09354
Description
Digital Peter is an educational task with a historical slant, created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of the Russian Academy of Sciences, the Federal Archival Agency of Russia, and the Russian State Archive of Ancient Acts.
A detailed description of the task can be found in detailed_description_of_the_task_en.pdf
The dataset consists of 662 full-page images and 9,696 annotated text files. There are 265,788 symbols and approximately 50,998 words.
Annotation format
The annotation is in COCO format. The annotation.json file contains the following dictionaries:

annotation["categories"] - a list of dicts with category info (category names and indexes).

annotation["images"] - a list of dicts describing the images. Each dict must contain the fields: file_name - the name of the image file, and id - the image id.

annotation["annotations"] - a list of dicts with markup information. Each dict stores the description of one polygon from the dataset and must contain the following fields: image_id - the index of the image on which the polygon is located; category_id - the polygon's category index; attributes - a dict with additional annotation information (in the translation subdict you can find the text translation for the line); segmentation - the coordinates of the polygon, a list of numbers that are x and y coordinate pairs.
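The structure above can be walked with the standard json module. A minimal sketch, assuming the file is named annotation.json and that translation holds the line's text as a string; the helper name load_line_annotations is hypothetical:

```python
import json

def load_line_annotations(path):
    """Parse a COCO-style annotation file and yield, for every annotated
    line, its image file name, text translation, and polygon coordinates."""
    with open(path, encoding="utf-8") as f:
        annotation = json.load(f)
    # Map image ids to image file names.
    images = {img["id"]: img["file_name"] for img in annotation["images"]}
    for ann in annotation["annotations"]:
        yield (
            images[ann["image_id"]],
            ann["attributes"]["translation"],  # text of the line
            ann["segmentation"],               # flat [x1, y1, x2, y2, ...] list
        )
```

Each yielded tuple pairs one polygon with the page it belongs to, which is the shape typically needed to build detection and OCR training samples.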
Competition
We held a competition based on the Digital Peter dataset. See the GitHub repository and the competition page (registration required).