# 🫧 BubbleSort-LLM
A fine-tuned TinyLLaMA-1.1B model with company-specific knowledge about Bubblesort.in and its startups.
## Model Details

### Model Description
BubbleSort-LLM is a LoRA fine-tuned version of TinyLLaMA designed to answer questions about Bubblesort.in, a tech company and startup ecosystem founded by Aditya Routh. The model has been trained to provide accurate information about the company's various ventures and services.
- Developed by: Aditya Routh / Bubblesort.in
- Model type: Causal Language Model (LoRA Adapter)
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
### Model Sources
- Repository: adiiiii13/bubblesort-llm
- Demo: Coming Soon
## About Bubblesort.in
Bubblesort.in is the parent organization for multiple startups:
| Startup | Description | Website |
|---|---|---|
| Ghar Ka Khana | Homemade food service platform | gharkakhana2026.in |
| GKK Intern | Internship platform for students | gkkintern.in |
| Plutoz | Social/NGO initiative for children | plutoz1.netlify.app |
| APA Collective | Freelancing agency | apacollective.netlify.app |
## Uses

### Direct Use
This model can be used for:
- Answering questions about Bubblesort.in and its startups
- Customer support chatbots for Bubblesort.in services
- Information retrieval about company services
### Out-of-Scope Use
- General knowledge questions (use base TinyLLaMA instead)
- Tasks requiring factual accuracy outside Bubblesort.in domain
- Production use without additional testing
## How to Get Started with the Model
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the base model and apply the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base_model, "adiiiii13/bubblesort-llm")
tokenizer = AutoTokenizer.from_pretrained("adiiiii13/bubblesort-llm")

# Create a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Build the prompt with the model's chat template
messages = [
    {"role": "system", "content": "You are a helpful assistant for Bubblesort.in"},
    {"role": "user", "content": "What is Bubblesort.in?"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

output = pipe(prompt, max_new_tokens=150, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```
## Training Details
### Training Data
Custom dataset containing information about Bubblesort.in, its services, startups, and company details.
### Training Procedure
#### Training Hyperparameters
| Parameter | Value |
|-----------|-------|
| LoRA Rank (r) | 16 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj |
| Training regime | bf16 mixed precision |
## Technical Specifications
### Model Architecture and Objective
- **Architecture:** LLaMA-based transformer with LoRA adapters
- **Parameters:** ~18MB adapter weights
- **Objective:** Causal language modeling
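The ~18 MB figure is consistent with the LoRA settings above. A back-of-the-envelope check, assuming TinyLlama-1.1B's published dimensions (hidden size 2048, 22 layers, 32 attention heads with 4 KV heads of head dimension 64); these values come from the base model's config, not from this card:

```python
# Estimate LoRA adapter size for r=16 on q/k/v/o projections of TinyLlama-1.1B
hidden = 2048               # hidden size (assumed from TinyLlama-1.1B config)
kv_dim = (2048 // 32) * 4   # 4 KV heads * head_dim 64 = 256 (grouped-query attention)
layers = 22
r = 16

def lora_params(d_in, d_out, rank):
    # LoRA adds two low-rank matrices per target: A (rank x d_in) and B (d_out x rank)
    return rank * d_in + d_out * rank

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(total)             # 4505600 trainable parameters
print(total * 4 / 1e6)   # ~18 MB when stored in fp32
```

About 4.5 M adapter parameters at 4 bytes each is roughly 18 MB, matching the stated adapter size.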
### Compute Infrastructure
#### Hardware
- Kaggle GPU (T4/P100)
#### Software
- Transformers
- PEFT 0.18.1
- PyTorch
## Citation
```bibtex
@misc{bubblesort-llm,
  author    = {Aditya Routh},
  title     = {BubbleSort-LLM: A Fine-tuned TinyLLaMA for Bubblesort.in},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/adiiiii13/bubblesort-llm}
}
```
## Model Card Authors

Aditya Routh (@adiiiii13)

## Model Card Contact

- GitHub: aditya04slg
- Website: adityarouth.site

## Framework Versions

- PEFT: 0.18.1
- Transformers: 4.x
- PyTorch: 2.x
Made with ❤️ by Bubblesort.in
## GGUF Usage

A GGUF build of the model is included in the repository and can be run with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download and load the GGUF build from the Hub
llm = Llama.from_pretrained(
    repo_id="adiiiii13/bubblesort-llm",
    filename="bubblesort-llm.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is Bubblesort.in?"}
    ]
)
print(response["choices"][0]["message"]["content"])
```