---
language:
- en
license:
- apache-2.0
- cc-by-sa-4.0
tags:
- code-generation
- AI
- Mirror
- mistral
- LLM
datasets:
- gpt-codefeedback
library_name: transformers
model_creator: Dipesh Majithia
model_name: Mirror
---

# Mirror Model Card

## Summary
Mirror is a fine-tuned large language model built on Mistral, optimized for code generation, debugging, and structured technical assistance. It has been trained on the GPT CodeFeedback dataset, enhancing its ability to provide precise, context-aware programming suggestions. While not a state-of-the-art model, Mirror demonstrates strong code understanding, refactoring capabilities, and instruction-following behavior.
The model is fine-tuned using LoRA with a focus on efficient inference and is designed to assist developers in writing clean, optimized, and well-structured code.
Mirror is available in different configurations to support various deployment environments.
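The exact LoRA configuration is not published in this card. The sketch below only illustrates what a LoRA fine-tuning setup for a Mistral-style causal LM typically looks like with the `peft` library; the base checkpoint and every hyperparameter shown are assumptions, not Mirror's actual training settings.

```python
# Illustrative only: a typical LoRA fine-tuning configuration for a
# Mistral-style causal LM using peft. All values below are assumptions,
# not Mirror's actual training hyperparameters.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed base checkpoint

lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumption)
    lora_alpha=32,                         # scaling factor (assumption)
    lora_dropout=0.05,                     # regularization (assumption)
    target_modules=["q_proj", "v_proj"],   # attention projections commonly targeted
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

Because only the low-rank adapter matrices are updated, the number of trainable parameters stays small, which is what keeps LoRA fine-tuning and deployment lightweight.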
## Model Overview

Mirror is a causal language model based on Mistral, trained with instruction tuning on a dataset designed to improve code review, debugging, and structured programming responses. The model is intended for the following uses (a minimal prompting sketch follows the list):
- Code generation across multiple programming languages.
- Code optimization and refactoring suggestions.
- Explaining and debugging errors.
- Providing structured, detailed coding assistance.
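As a rough illustration of these uses, the model can be prompted directly through the `transformers` pipeline API. This is only a sketch: the repository ID is a placeholder, and the generation settings are illustrative rather than recommended values.

```python
# Minimal sketch of prompting Mirror directly with the transformers pipeline API.
# The repository ID is a placeholder; point it at wherever the model is hosted.
import torch
from transformers import pipeline

generate_code = pipeline(
    model="your-huggingface-username/Mirror",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

res = generate_code(
    "Write a Python function that reverses a string.",
    max_new_tokens=256,   # illustrative generation limit
)
print(res[0]["generated_text"])
```

The LangChain integration below wraps the same kind of pipeline object.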
## LangChain Usage

For applications built with LangChain, set `return_full_text=True` when constructing the pipeline so that the prompt and the generated text are returned together, as LangChain expects the full response.
```python
import torch
from transformers import pipeline
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# Build the text-generation pipeline; the repository ID is a placeholder.
generate_code = pipeline(
    model="your-huggingface-username/Mirror",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    return_full_text=True,
)

# Wrap the pipeline for LangChain and pass instructions through a simple template.
prompt = PromptTemplate(input_variables=["instruction"], template="{instruction}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_code)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)

print(llm_chain.predict(instruction="Write a Python function to check if a number is prime."))
```