daVinci-origin-7B

daVinci-origin-7B is a fully transparent, 7-billion-parameter foundation model trained from scratch. It serves as a "clean-room" baseline for the research paper *Data Darwinism, Part 1: Unlocking the Value of Scientific Data for Pre-training*.

Unlike most open-source models, daVinci-origin-7B was trained on a dataset that strictly excludes scientific content (books and research papers). This design lets researchers unambiguously attribute performance gains to specific domain-data injection strategies during continued pre-training.
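A minimal usage sketch, assuming the checkpoint follows the standard Hugging Face causal-LM interface (the prompt text and generation settings below are illustrative, not from the model card):

```python
# Minimal generation sketch for GAIR/daVinci-origin-7B.
# Assumption: the checkpoint exposes the standard AutoModelForCausalLM /
# AutoTokenizer interface; the prompt and max_new_tokens are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "GAIR/daVinci-origin-7B"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model in BF16 (its native tensor type) and complete a prompt."""
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Continued pre-training on scientific data"))
```

Because the base model deliberately saw no scientific text, completions on science-heavy prompts are useful as a before/after probe when comparing against continually pre-trained variants.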

Format: Safetensors · Model size: 8B params · Tensor type: BF16

Model tree for GAIR/daVinci-origin-7B: 2 quantized models are available.