AutoTrain Dataset for project: my-qa
Dataset Description
This dataset has been automatically processed by AutoTrain for project my-qa.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows:
[
{
"context": "Candidates, whose selection is recommended by the DRC and approved by Chairman Senate, shall be offered admission and advised to deposit the prescribed fee. After depositing the fee, he/she shall be designated as \ufffdPhD Scholar (PS)\ufffd. For all purposes, the date of registration of a PS shall be the date on which he/she has deposited the fee in the institute. After registration, Research Advisory Committees (RACs) for the individual PS shall be constituted by the concerned Head of the Department in consultation with the Sup",
"question": "Who is PS?",
"answers.text": [
"PhD Scholar"
],
"answers.answer_start": [
214
],
"feat_answer_id": [
955042
],
"feat_document_id": [
1586827
],
"feat_question_id": [
1066405
],
"feat_answer_end": [
225
],
"feat_answer_category": [
null
],
"feat_file_name": [
null
]
},
{
"context": " Indian Council of Cultural Relations, Govt. of India, are eligible to apply provided that they possess the same minimum qualifications as Master\ufffds Degree in Engineering/Technology in the relevant area of research along with a Bachelor\ufffds Degree in an appropriate branch of Engineering/Technology with first-class or minimum 60% marks (or CGPA of 6.5 on a 10-point scale or equivalent) at Master\ufffds and Bachelor\ufffds level. Master\ufffds Degree in an appropriate branch of Science/Humanities/Social Sciences/Management with a first-class or minimum 60% marks (or CGPA of 6.5 on a 10-point scale or equivalent) or equivalen",
"question": "What class is required for Foreign National candidates in Ph.D.?",
"answers.text": [
" first-class"
],
"answers.answer_start": [
409
],
"feat_answer_id": [
955615
],
"feat_document_id": [
1586816
],
"feat_question_id": [
1066345
],
"feat_answer_end": [
421
],
"feat_answer_category": [
null
],
"feat_file_name": [
null
]
}
]
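The offset fields are character indices into `context`: for each answer, slicing the context with `answers.answer_start` and `feat_answer_end` should reproduce `answers.text`. A minimal sanity check against the first sample above (context truncated just past the answer span; offsets unchanged):

```python
# First sample from above, truncated shortly after the answer span;
# the offsets are character indices into the context string.
context = (
    "Candidates, whose selection is recommended by the DRC and approved "
    "by Chairman Senate, shall be offered admission and advised to "
    "deposit the prescribed fee. After depositing the fee, he/she shall "
    "be designated as 'PhD Scholar (PS)'."
)
start, end = 214, 225          # answers.answer_start[0], feat_answer_end[0]
answer = context[start:end]
print(answer)                  # -> PhD Scholar
```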
Dataset Fields
The dataset has the following fields (also called "features"):
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_answer_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_document_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_question_id": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_answer_end": "Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)",
"feat_answer_category": "Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)",
"feat_file_name": "Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)"
}
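These are flattened SQuAD-style features: two scalar string fields plus eight variable-length sequence fields. As a rough illustration (plain Python, not the `datasets` API; the field grouping is taken from the listing above), a record can be shape-checked like this:

```python
# Field names copied from the feature listing above, split by kind:
# scalar string fields vs. variable-length sequence fields.
SCALAR_FIELDS = {"context", "question"}
SEQUENCE_FIELDS = {
    "answers.text", "answers.answer_start",
    "feat_answer_id", "feat_document_id", "feat_question_id",
    "feat_answer_end", "feat_answer_category", "feat_file_name",
}

def has_expected_shape(sample: dict) -> bool:
    """True if the sample has exactly these fields with the right kinds."""
    if set(sample) != SCALAR_FIELDS | SEQUENCE_FIELDS:
        return False
    if not all(isinstance(sample[name], str) for name in SCALAR_FIELDS):
        return False
    return all(isinstance(sample[name], list) for name in SEQUENCE_FIELDS)

# Shape of the first sample shown above (context shortened here).
sample = {
    "context": "... shall be designated as 'PhD Scholar (PS)' ...",
    "question": "Who is PS?",
    "answers.text": ["PhD Scholar"],
    "answers.answer_start": [214],
    "feat_answer_id": [955042],
    "feat_document_id": [1586827],
    "feat_question_id": [1066405],
    "feat_answer_end": [225],
    "feat_answer_category": [None],
    "feat_file_name": [None],
}
print(has_expected_shape(sample))  # -> True
```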
Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
|---|---|
| train | 63 |
| valid | 16 |