---
license: apache-2.0
---

This is a model for testing llama.cpp-based runtimes; the goal is to have the smallest working GGUF file possible.

Generated by https://github.com/Firefox-AI/tinyllama