Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sweaterdog/Andy-4-preview-rough
# Run inference directly in the terminal:
llama-cli -hf Sweaterdog/Andy-4-preview-rough

Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Sweaterdog/Andy-4-preview-rough
# Run inference directly in the terminal:
./llama-cli -hf Sweaterdog/Andy-4-preview-rough

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Sweaterdog/Andy-4-preview-rough
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Sweaterdog/Andy-4-preview-rough

Use Docker
docker model run hf.co/Sweaterdog/Andy-4-preview-rough

🌵Andy-4-preview-rough🧑‍💻
This model is NOT finished; it is the first part of the Andy-4-preview fine-tune.
Why?
Andy-4 has the potential to be very good. By releasing a preview of it and gathering user feedback, we can make the full Andy-4 model even better.
How?
Andy-4-preview-rough is the first part of a fine-tune totaling 6 epochs, 2 of which used a heavier training regime. This is that model.
The modelfile for Andy-4-preview-rough is, at the moment, copy-pasted from Andy-3.6, so it may not work as well as it could. If you find a better modelfile, please open a PR with it and describe how it's better.
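For contributors who want to experiment, an Ollama Modelfile for a GGUF model generally has this shape. Every value below is illustrative only (including the filename) and is not the actual Andy-3.6 configuration:

```
FROM ./Andy-4-preview-rough-Q4_K_M.gguf

# Illustrative sampling settings -- not the shipped values, tune as needed
PARAMETER temperature 0.7
PARAMETER num_ctx 8192

SYSTEM "You are Andy, a Minecraft-playing assistant."
```

A `TEMPLATE` block matching the base model's chat format is usually needed as well; see the Ollama Modelfile reference for the directive syntax.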
-- That's it for this repo; I didn't wanna put a whole lot of effort into something that would only be around for like a week or two ;p
Example code block generated by Andy-4-preview-rough (from the Q4_K_M quantization):
// Build a small, 5x7 starter cabin with oak planks, including a door and a wooden roof
const buildCabin = async (bot) => {
  const position = world.getPosition(bot);
  const startX = Math.floor(position.x);
  const startY = Math.floor(position.y);
  const startZ = Math.floor(position.z);
  // Build the walls
  for (let x = 0; x < 5; x++) {
    for (let y = 0; y < 4; y++) {
      await skills.placeBlock(bot, 'oak_planks', startX + x, startY + y, startZ);
      await skills.placeBlock(bot, 'oak_planks', startX + x, startY + y, startZ + 6);
    }
  }
  for (let z = 0; z < 7; z++) {
    for (let y = 0; y < 4; y++) {
      await skills.placeBlock(bot, 'oak_planks', startX, startY + y, startZ + z);
      await skills.placeBlock(bot, 'oak_planks', startX + 4, startY + y, startZ + z);
    }
  }
  // Add a door in the middle of one side
  await skills.placeBlock(bot, 'oak_door', startX + 2, startY, startZ);
  // Build the roof
  for (let x = 0; x < 5; x++) {
    for (let z = 0; z < 7; z++) {
      await skills.placeBlock(bot, 'oak_planks', startX + x, startY + 4, startZ + z);
    }
  }
  // Add a wooden roof on top
  await skills.placeBlock(bot, 'oak_planks', startX, startY + 5, startZ);
  await skills.placeBlock(bot, 'oak_planks', startX + 1, startY + 5, startZ);
  await skills.placeBlock(bot, 'oak_planks', startX + 2, startY + 5, startZ);
  await skills.placeBlock(bot, 'oak_planks', startX + 3, startY + 5, startZ);
  await skills.placeBlock(bot, 'oak_planks', startX + 4, startY + 5, startZ);
  for (let z = 1; z < 6; z++) {
    await skills.placeBlock(bot, 'oak_planks', startX, startY + 5, startZ + z);
    await skills.placeBlock(bot, 'oak_door', startX + 2, startY + 4, startZ + z);
  }
};
await buildCabin(bot);
Quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 8-bit, 16-bit
Model tree for Sweaterdog/Andy-4-preview-rough
Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sweaterdog/Andy-4-preview-rough
# Run inference directly in the terminal:
llama-cli -hf Sweaterdog/Andy-4-preview-rough
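Whichever install route is used, `llama-server` exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server is running on its default port (8080); the `model` value is a placeholder, since `llama-server` serves whichever model it was started with:

```python
# Query a local llama-server via its OpenAI-compatible chat endpoint.
import json
import urllib.request

def build_payload(prompt):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": "Andy-4-preview-rough",  # placeholder; ignored by a single-model server
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(prompt, base_url="http://localhost:8080"):
    """Send the prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

For example, `chat("Build a small oak cabin.")` returns the model's reply as a string once the server is up.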