Update config.json
#1
by jkeisling - opened
The tokenizer vocab size for CodeLlama 13B and 7B was expanded with infill tokens (see the research paper, pg. 4). I checked, and the new vocab size is 32,016. Inference works fine with the incorrect count, but PEFT training requires the vocab size to be right. I edited the config locally by hand and training works now.
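For reference, the change is to the `vocab_size` field in `config.json`; a minimal sketch of the relevant fragment (other fields omitted, assuming the standard Hugging Face config layout):

```json
{
  "vocab_size": 32016
}
```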
Thanks, yeah, I've already fixed it
TheBloke changed pull request status to closed
Oh shit, I hadn't done this one for some reason
TheBloke changed pull request status to open
TheBloke changed pull request status to merged
Thanks!