# HopAlFikr
HopAlFikr is a fine-tuned language model optimized for conversational AI and coding assistance.
## Model Details
- Base Model: Mistral 7B v0.3
- License: Apache 2.0 (Commercial use allowed)
- Fine-tuning: LoRA
- Languages: English, Urdu
- Developer: TaimoorSiddiqui
## Intended Use
- Conversational AI assistant
- Code generation and explanation
- General knowledge Q&A
- Educational applications
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("TaimoorSiddiqui/HopAlFikr")
tokenizer = AutoTokenizer.from_pretrained("TaimoorSiddiqui/HopAlFikr")

# Tokenize a prompt and generate a response
inputs = tokenizer("Hello, how can I help you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training
- Framework: Transformers + PEFT (LoRA)
- Hardware: Kaggle T4 GPU
- Epochs: 3
- Learning Rate: 2e-4
## License

This model is licensed under the Apache 2.0 License.
You are free to:
- ✅ Use commercially
- ✅ Modify and distribute
- ✅ Use privately
- ✅ Patent use
## Citation

```bibtex
@misc{hopalfikr2024,
  author    = {Taimoor Siddiqui},
  title     = {HopAlFikr: A Fine-tuned Language Model},
  year      = {2024},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/TaimoorSiddiqui/HopAlFikr}
}
```
## Acknowledgments
Built on the excellent Mistral 7B v0.3 base model.