Tags: Text Generation · GGUF · English · quantized · 1b · llama-cpp · imatrix · conversational
How to use from llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MrDevCoder01/TrainedModels
# Run inference directly in the terminal:
llama-cli -hf MrDevCoder01/TrainedModels
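
Once llama-server is running, it exposes an OpenAI-compatible HTTP API. Below is a minimal sketch of querying it from Python with the openai client, assuming the server's default listen address of http://localhost:8080 (adjust if you passed --host or --port):

# Sketch: talk to the local llama-server through its OpenAI-compatible endpoint.
# The base_url assumes llama-server's default port 8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="MrDevCoder01/TrainedModels",  # llama-server serves one model; the name is informational
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
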
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MrDevCoder01/TrainedModels
# Run inference directly in the terminal:
llama-cli -hf MrDevCoder01/TrainedModels
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MrDevCoder01/TrainedModels
# Run inference directly in the terminal:
./llama-cli -hf MrDevCoder01/TrainedModels
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MrDevCoder01/TrainedModels
# Run inference directly in the terminal:
./build/bin/llama-cli -hf MrDevCoder01/TrainedModels
Use Docker
docker model run hf.co/MrDevCoder01/TrainedModels
Quick Links

PT1S-1B-Q8.gguf

This model is a 1-billion parameter text generation model trained on a high-quality mixture of synthetic and web-crawled data. It is optimized for efficiency and performance in a small footprint.

Model Details

Training Information

The model was trained on a curated blend of:

  1. Cosmopedia: A large-scale synthetic dataset designed to provide high-quality educational content across various domains.
  2. Falcon RefinedWeb: A massive, filtered web dataset that provides broad world knowledge and linguistic diversity.

This combination allows the model to have both structured knowledge from synthetic sources and a natural "web-aware" conversational style.
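
For orientation only, here is a minimal sketch of streaming a few samples from the two public datasets named above with the Hugging Face datasets library. The repo IDs, config name, and field names below are assumptions based on the dataset names; this card does not state the exact revisions or splits used for training.

# Sketch: peek at the presumed training corpora (repo IDs, config, and fields are assumptions).
from datasets import load_dataset

cosmopedia = load_dataset("HuggingFaceTB/cosmopedia", "web_samples_v2",
                          split="train", streaming=True)
refinedweb = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

print(next(iter(cosmopedia))["text"][:200])      # Cosmopedia documents are under "text"
print(next(iter(refinedweb))["content"][:200])   # RefinedWeb documents are under "content"
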

Usage

llama.cpp

You can run this model directly with llama.cpp (in current builds the main example binary is named llama-cli; older builds called it main):

./llama-cli -m PT1S-1B-Q8.gguf -p "Once upon a time," -n 128
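
If you prefer to fetch the GGUF file explicitly instead of using the -hf flag, here is a minimal sketch with huggingface_hub (the filename is taken from the Quick Links section above):

# Sketch: download the quantized GGUF once, then point llama-cli or llama-server at it with -m.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MrDevCoder01/TrainedModels",
    filename="PT1S-1B-Q8.gguf",
)
print(model_path)
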

Python (via llama-cpp-python)

from llama_cpp import Llama

# Load the quantized GGUF file from the current directory.
llm = Llama(model_path="./PT1S-1B-Q8.gguf")

# Plain text completion; the generated text is in output["choices"][0]["text"].
output = llm("Q: What is the importance of the Cosmopedia dataset? A:", max_tokens=100)
print(output["choices"][0]["text"])
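
Because the card is tagged conversational, here is a minimal sketch of chat-style usage with llama-cpp-python. It assumes the GGUF metadata includes a chat template; if it does not, fall back to plain prompting as shown above.

# Sketch: chat-style generation; relies on a chat template stored in the GGUF metadata (assumption).
from llama_cpp import Llama

llm = Llama(model_path="./PT1S-1B-Q8.gguf", n_ctx=2048)
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what 8-bit quantization does to a language model."},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
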

Intended Use

This model is ideal for:

  • Lightweight text generation tasks.
  • Educational applications.
  • On-device inference where memory is limited.
  • Research into small language models (SLMs).

Limitations and Bias

While trained on filtered data, small models may still exhibit biases or generate incorrect information (hallucinations). Users should always verify the output of the model for critical applications.

Format: GGUF
Model size: 1B params
Architecture: llama
