Run inference with Axon Pico.
Run inference with the oscar128372/Axon-Nano-6M model.
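The model-ID task above can be sketched with the Hugging Face `transformers` API. This is a minimal, illustrative sketch: it assumes the `oscar128372/Axon-Nano-6M` checkpoint loads through `AutoModelForCausalLM`/`AutoTokenizer` (with `trust_remote_code=True` in case the repo ships custom model code); the prompt and generation settings are placeholder choices, not values from this document.

```python
# Hedged sketch: run text generation with a Hugging Face model ID.
# Assumption: the checkpoint is compatible with AutoModelForCausalLM;
# trust_remote_code=True is only needed if the repo defines custom classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oscar128372/Axon-Nano-6M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Once upon a time"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

The same pattern applies to the other checkpoints mentioned here: swap in the appropriate model ID and keep the rest of the loop unchanged.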
Try i3-80m, a state-of-the-art efficient-training LM architecture.