## How to use with llama.cpp

### Install with Homebrew

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf adiiiii13/bubblesort-llm

# Run inference directly in the terminal:
llama-cli -hf adiiiii13/bubblesort-llm
```
### Install with WinGet (Windows)

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf adiiiii13/bubblesort-llm

# Run inference directly in the terminal:
llama-cli -hf adiiiii13/bubblesort-llm
```
### Use a pre-built binary

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf adiiiii13/bubblesort-llm

# Run inference directly in the terminal:
./llama-cli -hf adiiiii13/bubblesort-llm
```
### Build from source

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf adiiiii13/bubblesort-llm

# Run inference directly in the terminal:
./build/bin/llama-cli -hf adiiiii13/bubblesort-llm
```
### Use Docker

```bash
docker model run hf.co/adiiiii13/bubblesort-llm
```
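
Once `llama-server` is running, it exposes an OpenAI-compatible API (on `http://localhost:8080` by default). Below is a minimal sketch of querying it with the `openai` Python client; the port, the placeholder API key, and the prompt are assumptions about a default local setup:

```python
# Query a locally running llama-server via its OpenAI-compatible endpoint.
# Assumes the default address http://localhost:8080; no real API key is needed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="adiiiii13/bubblesort-llm",  # llama-server serves the single loaded model
    messages=[
        {"role": "system", "content": "You are a helpful assistant for Bubblesort.in"},
        {"role": "user", "content": "What is Bubblesort.in?"},
    ],
    max_tokens=150,
)
print(response.choices[0].message.content)
```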
# 🫧 BubbleSort-LLM

A fine-tuned TinyLLaMA-1.1B model with company-specific knowledge about Bubblesort.in and its startups.

## Model Details

### Model Description

BubbleSort-LLM is a LoRA fine-tuned version of TinyLLaMA designed to answer questions about Bubblesort.in, a tech company and startup ecosystem founded by Aditya Routh. The model has been trained to provide accurate information about the company's various ventures and services.

- **Developed by:** Aditya Routh / Bubblesort.in
- **Model type:** Causal Language Model (LoRA adapter)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0

### Model Sources

- **Repository:** https://huggingface.co/adiiiii13/bubblesort-llm

## About Bubblesort.in

Bubblesort.in is the parent organization for multiple startups:

| Startup | Description | Website |
|---------|-------------|---------|
| 🍛 Ghar Ka Khana | Homemade food service platform | gharkakhana2026.in |
| 💼 GKK Intern | Internship platform for students | gkkintern.in |
| 💚 Plutoz | Social/NGO initiative for children | plutoz1.netlify.app |
| 🎨 APA Collective | Freelancing agency | apacollective.netlify.app |

## Uses

### Direct Use

This model can be used for:

- Answering questions about Bubblesort.in and its startups
- Customer support chatbots for Bubblesort.in services
- Information retrieval about company services

### Out-of-Scope Use

- General knowledge questions (use the base TinyLlama model instead)
- Tasks requiring factual accuracy outside the Bubblesort.in domain
- Production use without additional testing

## How to Get Started with the Model

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the base model and apply the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base_model, "adiiiii13/bubblesort-llm")
tokenizer = AutoTokenizer.from_pretrained("adiiiii13/bubblesort-llm")

# Create a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Build a prompt in the chat format
messages = [
    {"role": "system", "content": "You are a helpful assistant for Bubblesort.in"},
    {"role": "user", "content": "What is Bubblesort.in?"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = pipe(prompt, max_new_tokens=150, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```
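
Continuing from the snippet above, you can optionally merge the adapter into the base weights so the model can be saved and served without a PEFT dependency. A minimal sketch using PEFT's `merge_and_unload`; the output directory name is an arbitrary example:

```python
# Merge the LoRA adapter into the base model weights (optional).
# The merged model behaves like a plain transformers model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("bubblesort-llm-merged")  # example path
tokenizer.save_pretrained("bubblesort-llm-merged")
```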

## Training Details

### Training Data

Custom dataset containing information about Bubblesort.in, its services, startups, and company details.

### Training Procedure

#### Training Hyperparameters

| Parameter | Value |
|-----------|-------|
| LoRA Rank (r) | 16 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj |
| Training regime | bf16 mixed precision |
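
For reference, the table above corresponds to a PEFT `LoraConfig` along these lines. This is a minimal sketch reconstructed from the listed hyperparameters; the dataset, trainer, and remaining training arguments are not given in this card and are omitted:

```python
# Sketch of the LoRA setup implied by the hyperparameter table.
# Only the values listed above are grounded; everything else is assumed.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```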

## Technical Specifications

### Model Architecture and Objective

- **Architecture:** LLaMA-based transformer with LoRA adapters
- **Adapter size:** ~18 MB of LoRA adapter weights
- **Objective:** Causal language modeling

### Compute Infrastructure

#### Hardware

- Kaggle GPU (T4/P100)

#### Software

- Transformers
- PEFT 0.18.1
- PyTorch

## Citation

```bibtex
@misc{bubblesort-llm,
  author = {Aditya Routh},
  title = {BubbleSort-LLM: A Fine-tuned TinyLLaMA for Bubblesort.in},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/adiiiii13/bubblesort-llm}
}
```

## Model Card Authors

Aditya Routh (@adiiiii13)

## Model Card Contact

- GitHub: aditya04slg
- Website: adityarouth.site

## Framework Versions

- PEFT: 0.18.1
- Transformers: 4.x
- PyTorch: 2.x

Made with 💜 by Bubblesort.in