Uploaded model

  • Developed by: qingy2024
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-2-2b-bnb-4bit

Note: This model uses a custom chat template:

```
Below is the original text. Please rewrite it to correct any grammatical errors if any, improve clarity, and enhance overall readability.

### Original Text:
{PROMPT HERE}

### Corrected Text:
{MODEL'S OUTPUT HERE}
```

I would recommend a temperature of 0.0 and a repeat penalty of 1.0 for optimal results with this model.
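The template can be filled in with a small helper before handing the prompt to any GGUF runtime. This is a minimal sketch; `build_prompt` and the example input are illustrative and not part of the model's tooling:

```python
def build_prompt(text: str) -> str:
    """Wrap raw text in this model's custom chat template (verbatim from the card)."""
    return (
        "Below is the original text. Please rewrite it to correct any "
        "grammatical errors if any, improve clarity, and enhance overall "
        "readability.\n\n"
        "### Original Text:\n"
        f"{text}\n\n"
        "### Corrected Text:\n"
    )

# Recommended sampling settings from this card.
SAMPLING = {"temperature": 0.0, "repeat_penalty": 1.0}

# Hypothetical input text, for illustration only.
prompt = build_prompt("their going to the store tomorow")
print(prompt)
```

The resulting string (and the settings above) can then be passed to whichever GGUF-compatible runtime you use, such as llama.cpp or llama-cpp-python.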

Model details

  • Format: GGUF
  • Model size: 3B params
  • Architecture: gemma2


Model tree for qingy2024/GRMR-2B-Instruct-GGUF
