VLM stands for Vortex Language Model, a series of transformer-based models developed by PingVortex. The models are designed for tasks such as text generation, reasoning, and instruction following.

Each VLM version is trained through a three-stage pipeline for progressive refinement.
In version names, K stands for Knowledge; higher K numbers indicate more advanced training stages.
| Size | Version | Training datasets |
|------|---------|-------------------|
| 124M | K3.0 | tatsu-lab/alpaca and a small custom dataset; Elriggs/openwebtext-100k; rahular/simple-wikipedia |
| 418M | K1.0 | ssbuild/alpaca_gpt4; effectiveML/ArXiv-10 |

All models aim to reach a target training loss that signifies strong generalization ability. Training is monitored using:
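The staged pipeline with a target-loss stopping criterion can be sketched as below. This is a toy illustration, not the actual VLM training code: the stage names, datasets, `target_loss` value, and the scalar squared-error "model" are all assumptions made for the example.

```python
# Minimal sketch of a three-stage training pipeline that stops a stage
# early once a target training loss is reached. The stages, data, and
# toy quadratic loss are illustrative assumptions only.

def train_stage(weight, data, lr=0.1, target_loss=0.01, max_steps=1000):
    """Gradient descent on a scalar squared-error loss until the target
    loss is reached or the step budget is exhausted."""
    loss = float("inf")
    for _ in range(max_steps):
        # Mean squared error of a single scalar weight against the data.
        loss = sum((weight - x) ** 2 for x in data) / len(data)
        if loss <= target_loss:
            break  # target reached; move on to the next stage
        # Gradient of the mean squared error with respect to the weight.
        grad = sum(2 * (weight - x) for x in data) / len(data)
        weight -= lr * grad
    return weight, loss

# Three stages, mirroring the pipeline: each stage refines the weights
# produced by the previous stage on a new (hypothetical) dataset.
stages = {
    "stage1_instructions": [1.0, 1.2, 0.8],
    "stage2_web_text":     [2.0, 2.1, 1.9],
    "stage3_wikipedia":    [1.5, 1.4, 1.6],
}

weight = 0.0
for name, data in stages.items():
    weight, loss = train_stage(weight, data)
    print(f"{name}: weight={weight:.3f} loss={loss:.4f}")
```

The key design point is that the weights flow from one stage into the next, so later stages refine rather than restart; a stage ends as soon as its loss target is met, otherwise it runs out its step budget.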