robzilla1738 committed
Commit 5acff05 · verified · Parent: 81d5104

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -17,7 +17,7 @@ So we started training our own.
 
  ## What we're working on
 
- We're fine-tuning Bible-specialized models from open-weight bases (Qwen 3.5-9B right now). Two-stage pipeline: continued pretraining on public-domain theological corpora (Calvin, Barnes, Pulpit Commentary, Keil-Delitzsch, creeds and confessions), then QLoRA instruction tuning on 50,000+ supervised examples.
+ We're fine-tuning Bible-specialized models from open-weight bases (Qwen 2.5 7B right now). Two-stage pipeline: continued pretraining on public-domain theological corpora (Calvin, Barnes, Pulpit Commentary, Keil-Delitzsch, creeds and confessions), then QLoRA instruction tuning on 50,000+ supervised examples.
 
  Those examples come from 24 synthetic data generators we wrote, each targeting a different slice of biblical scholarship. Verse lookup, passage exposition, Hebrew and Greek exegesis, cross-references, doctrinal Q&A, patristic readings, creedal analysis, multi-tradition comparison. All of it grounded in the Berean Standard Bible with Hebrew morphology, Greek lexicon data, and Strong's numbers across the full 31,102-verse canon.
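The second stage of the pipeline the README describes (QLoRA instruction tuning on a 4-bit base) might be set up roughly like this with Hugging Face `transformers` and `peft`. This is a sketch under assumptions: the hyperparameters and target modules are illustrative defaults, not the project's actual training configuration.

```python
# Hypothetical QLoRA setup sketch -- hyperparameters are illustrative
# assumptions, not the repository's actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the open-weight base quantized to 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",  # base family named in the updated README
    quantization_config=bnb_config,
)

# Train only low-rank adapters on top of the frozen 4-bit weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With a config like this, only the adapter weights train (typically well under 1% of total parameters), which is what makes instruction tuning a 7B base feasible on a single GPU.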
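One of the 24 synthetic data generators mentioned above, say the verse-lookup one, could be sketched as below. The record layout, field names, and sample Strong's numbers are assumptions for illustration only, not the project's real schema or data.

```python
# Hypothetical sketch of one synthetic data generator (verse lookup).
# The verse-record fields ("text", "strongs") are illustrative assumptions
# about how BSB text plus Strong's numbers might be keyed.

VERSES = {
    ("John", 3, 16): {
        "text": "For God so loved the world that He gave His one and only Son, ...",
        "strongs": ["G2316", "G25", "G2889"],  # sample Strong's numbers
    },
}

def make_verse_lookup_example(book: str, chapter: int, verse: int) -> dict:
    """Emit one supervised (instruction, response) pair for instruction tuning."""
    record = VERSES[(book, chapter, verse)]
    return {
        "instruction": f"Quote {book} {chapter}:{verse} from the Berean Standard Bible.",
        "response": record["text"],
        "metadata": {"strongs": record["strongs"], "generator": "verse_lookup"},
    }

example = make_verse_lookup_example("John", 3, 16)
print(example["instruction"])  # -> Quote John 3:16 from the Berean Standard Bible.
```

Running 24 such generators over the full 31,102-verse canon is one plausible way to reach the 50,000+ supervised examples the README cites.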