Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed

```
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Failed to parse string: '1537+4053' as a scalar of type int64

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2223, in cast_table_to_schema
    arrays = [
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2224, in <listcomp>
    cast_array_to_feature(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2086, in cast_array_to_feature
    return array_cast(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1949, in array_cast
    return array.cast(pa_type)
  File "pyarrow/array.pxi", line 996, in pyarrow.lib.Array.cast
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/compute.py", line 404, in cast
    return call_function("cast", [arr], options, memory_pool)
  File "pyarrow/_compute.pyx", line 590, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 385, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: '1537+4053' as a scalar of type int64

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```

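The error above shows why the viewer fails: at least one value in the `strong` column (e.g. `'1537+4053'`, a compound Strong's number) cannot be cast to int64. A minimal workaround sketch when loading the CSV yourself, assuming pandas; the in-memory sample rows here are hypothetical, and with the real file you would pass the same `dtype` to `pd.read_csv("macula_grass.csv", ...)`:

```python
# Sketch: read the `strong` column as text instead of int64, so compound
# Strong's numbers such as '1537+4053' load without a cast error.
import io

import pandas as pd

# Hypothetical stand-in for macula_grass.csv, including one compound entry.
sample = io.StringIO(
    "word_id,text,strong\n"
    "n40001003009,ἐκ,1537\n"
    "n99999999999,ἐκβολή,1537+4053\n"  # hypothetical compound row
)

df = pd.read_csv(sample, dtype={"strong": "string"})
print(df["strong"].tolist())  # ['1537', '1537+4053']
```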

| Unnamed: 0 (int64) | sentence_id (int64) | clause_id (int64) | phrase_id (int64) | word_id (string) | ref (string) | text (string) | lemma (string) | gloss (string) | strong (int64) | morph (string) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 1 | 1 | n40001001001 | MAT 1:1!1 | Βίβλος | βίβλος | book | 976 | N-NSF |
| 1 | 1 | 1 | 1 | n40001001002 | MAT 1:1!2 | γενέσεως | γένεσις | genealogy | 1,078 | N-GSF |
| 2 | 1 | 1 | 1 | n40001001003 | MAT 1:1!3 | Ἰησοῦ | Ἰησοῦς | Jesus | 2,424 | N-GSM |
| 3 | 1 | 1 | 1 | n40001001004 | MAT 1:1!4 | χριστοῦ | Χριστός | Christ | 5,547 | N-GSM |
| 4 | 1 | 1 | 1 | n40001001005 | MAT 1:1!5 | υἱοῦ | υἱός | son | 5,207 | N-GSM |
| 5 | 1 | 1 | 1 | n40001001006 | MAT 1:1!6 | Δαυὶδ | Δαυίδ | David | 1,138 | N-PRI |
| 6 | 1 | 1 | 1 | n40001001007 | MAT 1:1!7 | υἱοῦ | υἱός | son | 5,207 | N-GSM |
| 7 | 1 | 1 | 1 | n40001001008 | MAT 1:1!8 | Ἀβραάμ. | Ἀβραάμ | Abraham | 11 | N-PRI |
| 8 | 2 | 2 | 2 | n40001002001 | MAT 1:2!1 | Ἀβραὰμ | Ἀβραάμ | Abraham | 11 | N-PRI |
| 9 | 2 | 2 | 3 | n40001002002 | MAT 1:2!2 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 10 | 2 | 2 | 4 | n40001002003 | MAT 1:2!3 | τὸν |  | null | 3,588 | T-ASM |
| 11 | 2 | 2 | 4 | n40001002004 | MAT 1:2!4 | Ἰσαάκ, | Ἰσαάκ | Isaac | 2,464 | N-PRI |
| 12 | 2 | 3 | 5 | n40001002005 | MAT 1:2!5 | Ἰσαὰκ | Ἰσαάκ | Isaac | 2,464 | N-PRI |
| 13 | 2 | 3 | 6 | n40001002006 | MAT 1:2!6 | δὲ | δέ | and | 1,161 | CONJ |
| 14 | 2 | 3 | 6 | n40001002007 | MAT 1:2!7 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 15 | 2 | 3 | 7 | n40001002008 | MAT 1:2!8 | τὸν |  | null | 3,588 | T-ASM |
| 16 | 2 | 3 | 7 | n40001002009 | MAT 1:2!9 | Ἰακώβ, | Ἰακώβ | Jacob | 2,384 | N-PRI |
| 17 | 2 | 4 | 8 | n40001002010 | MAT 1:2!10 | Ἰακὼβ | Ἰακώβ | Jacob | 2,384 | N-PRI |
| 18 | 2 | 4 | 9 | n40001002011 | MAT 1:2!11 | δὲ | δέ | and | 1,161 | CONJ |
| 19 | 2 | 4 | 9 | n40001002012 | MAT 1:2!12 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 20 | 2 | 4 | 10 | n40001002013 | MAT 1:2!13 | τὸν |  | null | 3,588 | T-ASM |
| 21 | 2 | 4 | 10 | n40001002014 | MAT 1:2!14 | Ἰούδαν | Ἰούδας | Judah | 2,455 | N-ASM |
| 22 | 2 | 4 | 10 | n40001002015 | MAT 1:2!15 | καὶ | καί | and | 2,532 | CONJ |
| 23 | 2 | 4 | 10 | n40001002016 | MAT 1:2!16 | τοὺς |  | the | 3,588 | T-APM |
| 24 | 2 | 4 | 10 | n40001002017 | MAT 1:2!17 | ἀδελφοὺς | ἀδελφός | brothers | 80 | N-APM |
| 25 | 2 | 4 | 10 | n40001002018 | MAT 1:2!18 | αὐτοῦ, | αὐτός | his | 846 | P-GSM |
| 26 | 2 | 5 | 11 | n40001003001 | MAT 1:3!1 | Ἰούδας | Ἰούδας | Judah | 2,455 | N-NSM |
| 27 | 2 | 5 | 12 | n40001003002 | MAT 1:3!2 | δὲ | δέ | and | 1,161 | CONJ |
| 28 | 2 | 5 | 12 | n40001003003 | MAT 1:3!3 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 29 | 2 | 5 | 13 | n40001003004 | MAT 1:3!4 | τὸν |  | null | 3,588 | T-ASM |
| 30 | 2 | 5 | 13 | n40001003005 | MAT 1:3!5 | Φαρὲς | Φαρές | Perez | 5,329 | N-PRI |
| 31 | 2 | 5 | 13 | n40001003006 | MAT 1:3!6 | καὶ | καί | and | 2,532 | CONJ |
| 32 | 2 | 5 | 13 | n40001003007 | MAT 1:3!7 | τὸν |  | null | 3,588 | T-ASM |
| 33 | 2 | 5 | 13 | n40001003008 | MAT 1:3!8 | Ζάρα | Ζάρα | Zerah | 2,196 | N-PRI |
| 34 | 2 | 5 | 14 | n40001003009 | MAT 1:3!9 | ἐκ | ἐκ | by | 1,537 | PREP |
| 35 | 2 | 5 | 14 | n40001003010 | MAT 1:3!10 | τῆς |  | null | 3,588 | T-GSF |
| 36 | 2 | 5 | 14 | n40001003011 | MAT 1:3!11 | Θαμάρ, | Θαμάρ | Tamar | 2,283 | N-PRI |
| 37 | 2 | 6 | 15 | n40001003012 | MAT 1:3!12 | Φαρὲς | Φαρές | Perez | 5,329 | N-PRI |
| 38 | 2 | 6 | 16 | n40001003013 | MAT 1:3!13 | δὲ | δέ | and | 1,161 | CONJ |
| 39 | 2 | 6 | 16 | n40001003014 | MAT 1:3!14 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 40 | 2 | 6 | 17 | n40001003015 | MAT 1:3!15 | τὸν |  | null | 3,588 | T-ASM |
| 41 | 2 | 6 | 17 | n40001003016 | MAT 1:3!16 | Ἑσρώμ, | Ἑσρώμ | Hezron | 2,074 | N-PRI |
| 42 | 2 | 7 | 18 | n40001003017 | MAT 1:3!17 | Ἑσρὼμ | Ἑσρώμ | Hezron | 2,074 | N-PRI |
| 43 | 2 | 7 | 19 | n40001003018 | MAT 1:3!18 | δὲ | δέ | and | 1,161 | CONJ |
| 44 | 2 | 7 | 19 | n40001003019 | MAT 1:3!19 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 45 | 2 | 7 | 20 | n40001003020 | MAT 1:3!20 | τὸν |  | null | 3,588 | T-ASM |
| 46 | 2 | 7 | 20 | n40001003021 | MAT 1:3!21 | Ἀράμ, | Ἀράμ | Ram | 689 | N-PRI |
| 47 | 2 | 8 | 21 | n40001004001 | MAT 1:4!1 | Ἀρὰμ | Ἀράμ | Ram | 689 | N-PRI |
| 48 | 2 | 8 | 22 | n40001004002 | MAT 1:4!2 | δὲ | δέ | and | 1,161 | CONJ |
| 49 | 2 | 8 | 22 | n40001004003 | MAT 1:4!3 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 50 | 2 | 8 | 23 | n40001004004 | MAT 1:4!4 | τὸν |  | null | 3,588 | T-ASM |
| 51 | 2 | 8 | 23 | n40001004005 | MAT 1:4!5 | Ἀμιναδάβ, | Ἀμιναδάβ | Amminadab | 284 | N-PRI |
| 52 | 2 | 9 | 24 | n40001004006 | MAT 1:4!6 | Ἀμιναδὰβ | Ἀμιναδάβ | Amminadab | 284 | N-PRI |
| 53 | 2 | 9 | 25 | n40001004007 | MAT 1:4!7 | δὲ | δέ | and | 1,161 | CONJ |
| 54 | 2 | 9 | 25 | n40001004008 | MAT 1:4!8 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 55 | 2 | 9 | 26 | n40001004009 | MAT 1:4!9 | τὸν |  | null | 3,588 | T-ASM |
| 56 | 2 | 9 | 26 | n40001004010 | MAT 1:4!10 | Ναασσών, | Ναασσών | Nahshon | 3,476 | N-PRI |
| 57 | 2 | 10 | 27 | n40001004011 | MAT 1:4!11 | Ναασσὼν | Ναασσών | Nahshon | 3,476 | N-PRI |
| 58 | 2 | 10 | 28 | n40001004012 | MAT 1:4!12 | δὲ | δέ | and | 1,161 | CONJ |
| 59 | 2 | 10 | 28 | n40001004013 | MAT 1:4!13 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 60 | 2 | 10 | 29 | n40001004014 | MAT 1:4!14 | τὸν |  | null | 3,588 | T-ASM |
| 61 | 2 | 10 | 29 | n40001004015 | MAT 1:4!15 | Σαλμών, | Σαλμών | Salmon | 4,533 | N-PRI |
| 62 | 2 | 11 | 30 | n40001005001 | MAT 1:5!1 | Σαλμὼν | Σαλμών | Salmon | 4,533 | N-PRI |
| 63 | 2 | 11 | 31 | n40001005002 | MAT 1:5!2 | δὲ | δέ | and | 1,161 | CONJ |
| 64 | 2 | 11 | 31 | n40001005003 | MAT 1:5!3 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 65 | 2 | 11 | 32 | n40001005004 | MAT 1:5!4 | τὸν |  | null | 3,588 | T-ASM |
| 66 | 2 | 11 | 32 | n40001005005 | MAT 1:5!5 | Βόες | Βόες | Boaz | 1,003 | N-PRI |
| 67 | 2 | 11 | 33 | n40001005006 | MAT 1:5!6 | ἐκ | ἐκ | by | 1,537 | PREP |
| 68 | 2 | 11 | 33 | n40001005007 | MAT 1:5!7 | τῆς |  | null | 3,588 | T-GSF |
| 69 | 2 | 11 | 33 | n40001005008 | MAT 1:5!8 | Ῥαχάβ, | Ῥαχάβ | Rahab | 4,477 | N-PRI |
| 70 | 2 | 12 | 34 | n40001005009 | MAT 1:5!9 | Βόες | Βόες | Boaz | 1,003 | N-PRI |
| 71 | 2 | 12 | 35 | n40001005010 | MAT 1:5!10 | δὲ | δέ | and | 1,161 | CONJ |
| 72 | 2 | 12 | 35 | n40001005011 | MAT 1:5!11 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 73 | 2 | 12 | 36 | n40001005012 | MAT 1:5!12 | τὸν |  | null | 3,588 | T-ASM |
| 74 | 2 | 12 | 36 | n40001005013 | MAT 1:5!13 | Ἰωβὴδ | Ἰωβήδ | Obed | 5,601 | N-PRI |
| 75 | 2 | 12 | 37 | n40001005014 | MAT 1:5!14 | ἐκ | ἐκ | by | 1,537 | PREP |
| 76 | 2 | 12 | 37 | n40001005015 | MAT 1:5!15 | τῆς |  | null | 3,588 | T-GSF |
| 77 | 2 | 12 | 37 | n40001005016 | MAT 1:5!16 | Ῥούθ, | Ῥούθ | Ruth | 4,503 | N-PRI |
| 78 | 2 | 13 | 38 | n40001005017 | MAT 1:5!17 | Ἰωβὴδ | Ἰωβήδ | Obed | 5,601 | N-PRI |
| 79 | 2 | 13 | 39 | n40001005018 | MAT 1:5!18 | δὲ | δέ | and | 1,161 | CONJ |
| 80 | 2 | 13 | 39 | n40001005019 | MAT 1:5!19 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 81 | 2 | 13 | 40 | n40001005020 | MAT 1:5!20 | τὸν |  | null | 3,588 | T-ASM |
| 82 | 2 | 13 | 40 | n40001005021 | MAT 1:5!21 | Ἰεσσαί, | Ἰεσσαί | Jesse | 2,421 | N-PRI |
| 83 | 2 | 14 | 41 | n40001006001 | MAT 1:6!1 | Ἰεσσαὶ | Ἰεσσαί | Jesse | 2,421 | N-PRI |
| 84 | 2 | 14 | 42 | n40001006002 | MAT 1:6!2 | δὲ | δέ | and | 1,161 | CONJ |
| 85 | 2 | 14 | 42 | n40001006003 | MAT 1:6!3 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 86 | 2 | 14 | 43 | n40001006004 | MAT 1:6!4 | τὸν |  | null | 3,588 | T-ASM |
| 87 | 2 | 14 | 43 | n40001006005 | MAT 1:6!5 | Δαυὶδ | Δαυίδ | David | 1,138 | N-PRI |
| 88 | 2 | 14 | 43 | n40001006006 | MAT 1:6!6 | τὸν |  | the | 3,588 | T-ASM |
| 89 | 2 | 14 | 43 | n40001006007 | MAT 1:6!7 | βασιλέα. | βασιλεύς | king | 935 | N-ASM |
| 90 | 3 | 15 | 44 | n40001006008 | MAT 1:6!8 | Δαυὶδ | Δαυίδ | David | 1,138 | N-PRI |
| 91 | 3 | 15 | 45 | n40001006009 | MAT 1:6!9 | δὲ | δέ | and | 1,161 | CONJ |
| 92 | 3 | 15 | 45 | n40001006010 | MAT 1:6!10 | ἐγέννησεν | γεννάω | fathered | 1,080 | V-AAI-3S |
| 93 | 3 | 15 | 46 | n40001006011 | MAT 1:6!11 | τὸν |  | null | 3,588 | T-ASM |
| 94 | 3 | 15 | 46 | n40001006012 | MAT 1:6!12 | Σολομῶνα | Σολομών | Solomon | 4,672 | N-ASM |
| 95 | 3 | 15 | 47 | n40001006013 | MAT 1:6!13 | ἐκ | ἐκ | by | 1,537 | PREP |
| 96 | 3 | 15 | 47 | n40001006014 | MAT 1:6!14 | τῆς |  | the | 3,588 | T-GSF |
| 97 | 3 | 15 | 47 | n40001006015 | MAT 1:6!15 | τοῦ |  | the | 3,588 | T-GSM |
| 98 | 3 | 15 | 47 | n40001006016 | MAT 1:6!16 | Οὐρίου, | Οὐρίας | Uriah | 3,774 | N-GSM |
| 99 | 3 | 16 | 48 | n40001007001 | MAT 1:7!1 | Σολομὼν | Σολομών | Solomon | 4,672 | N-NSM |
End of preview.

Macula GNT to GRASS Converter

This project provides a set of Python scripts to convert the deeply nested, hierarchical XML of the Macula Greek New Testament syntax trees into a flat, query-friendly CSV format. I called it "grass" because where Macula data is a bunch of trees, this simplification makes it more akin to grass.

The final output is a single, unified CSV file (macula_grass.csv) containing the entire New Testament. Each sentence, clause, and phrase-group is assigned a unique, sequential integer ID, making the data ideal for less complex queries (especially if you're familiar with the BHSA data).

The Problem

The Macula GNT data is an incredibly rich resource for detailed syntactic analysis. However, its XML structure, while precise, is difficult to use for answering straightforward linguistic questions, such as:

  • "Find all clauses that contain the preposition ἐν and the verb λέγω."
  • "Show me all the subjects of the verb ποιέω."
  • "Analyze the co-occurrence of specific words within a single clausal unit."

Performing these queries on the raw XML requires complex tree-traversal code, making simple data exploration a significant programming challenge.

The Solution: The "Grass" Format

This project solves the problem by "flattening" the syntax tree into a simple, tabular format. It processes the XML and assigns each word three crucial identifiers:

  1. sentence_id: A unique ID for each sentence.
  2. clause_id: A unique ID for each clause.
  3. phrase_id: A unique ID for each major functional constituent of a clause (e.g., the entire subject phrase, the verb phrase, an adverbial phrase).

This structure clusters words into meaningful units, making the data behave much more like the BHSA dataset and enabling powerful, simple queries. It should be noted, however, that syntax is actually far more hierarchical and this represents an oversimplification for the sake of usability.
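For instance, the first example query from the Problem section ("find all clauses that contain the preposition ἐν and the verb λέγω") reduces to a simple group-by on clause_id. A minimal sketch with pandas; the DataFrame below uses invented toy rows, not real Macula data:

```python
# Sketch: "find all clauses containing both ἐν and λέγω" on the flat
# grass format. This toy frame stands in for macula_grass.csv.
import pandas as pd

df = pd.DataFrame(
    {
        "clause_id": [1, 1, 1, 2, 2],
        "lemma": ["ἐν", "λέγω", "αὐτός", "ἐν", "οἶκος"],
    }
)

targets = {"ἐν", "λέγω"}
# For each clause, check whether its set of lemmas covers both targets.
hits = df.groupby("clause_id")["lemma"].apply(lambda s: targets <= set(s))
matching_clauses = hits[hits].index.tolist()
print(matching_clauses)  # [1]
```

The same pattern extends to the other example queries: any condition that can be phrased over the rows of a clause becomes a group-by plus a predicate.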

Key Features

  • Flattens Hierarchical XML: Converts complex trees into a simple, row-per-word CSV.
  • Preserves Word Order: The output CSV maintains the exact word order of the original Greek text.
  • Intelligent Clause Grouping: It identifies the main functional parts of a clause and groups them under a shared phrase_id.
  • Fixes Broken Clauses: It intelligently re-assigns clause-linking conjunctions (like καί and δέ) to the clause they logically introduce, ensuring clausal units are contiguous rather than broken.
  • Unified Dataset: Processes all individual book XMLs and combines them into a single, analysis-ready CSV for the entire New Testament.
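The clause-fixing feature can be pictured with a minimal sketch. This is not the project's actual implementation, just the underlying idea: a linking conjunction stranded at a clause boundary is pulled forward into the clause it introduces. The demo rows are invented for illustration:

```python
# Sketch (not the project's real code) of re-assigning clause-linking
# conjunctions so clausal units stay contiguous.
CONJUNCTIONS = {"καί", "δέ"}

def reassign_conjunctions(words):
    """words: list of dicts with 'lemma' and 'clause_id', in text order."""
    for i in range(len(words) - 1):
        w, nxt = words[i], words[i + 1]
        # A conjunction whose clause differs from the next word's clause
        # is moved into the clause it introduces.
        if w["lemma"] in CONJUNCTIONS and w["clause_id"] != nxt["clause_id"]:
            w["clause_id"] = nxt["clause_id"]
    return words

demo = [
    {"lemma": "Ἀβραάμ", "clause_id": 2},
    {"lemma": "καί", "clause_id": 2},   # stranded at the end of clause 2
    {"lemma": "Ἰσαάκ", "clause_id": 3},  # start of clause 3
]
print([w["clause_id"] for w in reassign_conjunctions(demo)])  # [2, 3, 3]
```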

Installation

  1. Python: Requires Python 3.6 or newer.

  2. Libraries: Install the required Python libraries (pandas and natsort; natsort was the one I had to add) using pip:

```bash
pip install pandas natsort
```
  3. Macula Data: Download or clone the Macula Greek New Testament data. This script assumes the SBLGNT XML files are located in a directory structure like ../macula-greek/SBLGNT/nodes/.

Usage

1. File Structure

Before running, ensure your directory structure looks like this:

```
project_root/
├── main.py               # The main processing script
├── process_csvs.py       # The script to unify CSVs (or included in main.py)
└── grass/                # Created automatically for intermediate per-book CSVs
../macula-greek/
└── SBLGNT/
    └── nodes/            # The Macula XML files (see Installation)
```

2. Execution

Run the main script from your terminal. Pass the path to all the XML files as an argument. The wildcard `*` is the easiest way to do this.

```bash
python main.py "/path/to/macula-greek/SBLGNT/nodes/*"
```

3. Workflow Explained

When you run the command, the following process occurs:

  1. Intermediate Directory: The script first creates a directory named grass/.
  2. Book-by-Book Processing: main.py iterates through each XML file (Matthew, Mark, etc.). For each book, it performs the flattening and clause-correction logic and saves a corresponding CSV file (e.g., 40-MAT.csv) in the grass/ directory.
  3. Automatic Unification: After all XML files have been processed, the script automatically calls the process_csvs function.
  4. ID Renumbering & Final Output: This function reads all the individual CSV files from the grass/ directory, combines them in biblical order, replaces the original unit IDs with clean, sequential integer IDs, and writes the final, complete dataset to macula_grass.csv in the root of your project folder.
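Step 4's renumbering can be sketched with pandas' factorize, which maps each distinct value to a sequential integer in order of first appearance (adding 1 makes the IDs start at 1). The column values below are invented stand-ins for whatever unit IDs the intermediate CSVs carry:

```python
# Sketch of the ID-renumbering step: combine books in order, then
# replace the original unit IDs with clean, sequential integers.
import pandas as pd

combined = pd.DataFrame(
    {
        # Hypothetical per-book unit IDs, concatenated in biblical order.
        "sentence_id": ["MAT-s1", "MAT-s1", "MRK-s1"],
        "clause_id": ["MAT-c1", "MAT-c2", "MRK-c1"],
    }
)

for col in ["sentence_id", "clause_id"]:
    # factorize assigns 0..n-1 in order of first appearance.
    combined[col] = pd.factorize(combined[col])[0] + 1

print(combined["sentence_id"].tolist())  # [1, 1, 2]
print(combined["clause_id"].tolist())    # [1, 2, 3]
```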

Output Format

The final macula_grass.csv file will contain the following columns:

| Column | Description |
| --- | --- |
| sentence_id | A unique integer ID for each sentence. |
| clause_id | A unique integer ID for each clause. Words in the same clause share this ID. |
| phrase_id | A unique integer ID for each major functional part of a clause (e.g., the subject, the object, etc.). |
| word_id | The original xml:id from the Macula data (e.g., n40001001001). |
| ref | The precise biblical reference for the word (e.g., MAT 1:1!1). |
| text | The Greek word as it appears in the text. |
| lemma | The dictionary lemma of the Greek word. |
| gloss | The English gloss of the word's lemma. |
| strong | The Strong's number for the word's lemma. |
| morph | The detailed morphological code (e.g., N-NSM, V-AAI-3S). |

License

Code: MIT License

The code in this repository is licensed under the MIT License. See the LICENSE file for details.

Data: CC BY 4.0

The data in this repository is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) License. It is wholly derived from the Macula Greek Linguistic Datasets, which is also licensed under CC BY 4.0.

This is a human-readable summary of (and not a substitute for) the license.

You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material for any purpose, even commercially.

The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:

  • Attribution — You must attribute the work as follows: "MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/". You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
