HopAlFikr

HopAlFikr is a fine-tuned language model optimized for conversational AI and coding assistance.

Model Details

  • Base model: Mistral 7B v0.3
  • Fine-tuning method: LoRA (via PEFT)

Intended Use

  • Conversational AI assistant
  • Code generation and explanation
  • General knowledge Q&A
  • Educational applications

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("TaimoorSiddiqui/HopAlFikr")
tokenizer = AutoTokenizer.from_pretrained("TaimoorSiddiqui/HopAlFikr")

# Tokenize a prompt and generate a response
inputs = tokenizer("Hello, how can I help you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
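
By default, `generate` uses greedy decoding. For more varied conversational output, sampling parameters can be passed through to `generate`. A minimal sketch continuing the snippet above; the temperature and top-p values are illustrative defaults, not settings recommended by this card:

```python
# Continues from the model/tokenizer loaded above.
# Sampling settings here are illustrative, not tuned values.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # assumed value
    top_p=0.9,        # assumed value
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```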

Training

  • Framework: Transformers + PEFT (LoRA)
  • Hardware: Kaggle T4 GPU
  • Epochs: 3
  • Learning Rate: 2e-4

License

This model is licensed under the Apache 2.0 License.

You are free to:

  • ✅ Use commercially
  • ✅ Modify and distribute
  • ✅ Use privately
  • ✅ Patent use

Citation

@misc{hopalfikr2024,
  author = {Taimoor Siddiqui},
  title = {HopAlFikr: A Fine-tuned Language Model},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/TaimoorSiddiqui/HopAlFikr}
}

Acknowledgments

Built on the excellent Mistral 7B v0.3 base model.
