---
license: apache-2.0
base_model: google/gemma-2-9b
tags:
  - merge
  - lora
  - gemma-2
library_name: transformers
---

# Cope-A-9B Merged Model

This model is the Gemma-2-9B base model with the zentropi-ai/cope-a-9b LoRA adapter merged into its weights, so it can be loaded directly with `transformers` without `peft`.

## Base Model

- **Base Model:** google/gemma-2-9b
- **LoRA Adapter:** zentropi-ai/cope-a-9b

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "cplonski/cope-a-9b-merged",
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; places weights automatically
)
tokenizer = AutoTokenizer.from_pretrained("cplonski/cope-a-9b-merged")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Model Details

- **Model Type:** Causal Language Model
- **Architecture:** Gemma-2
- **Parameters:** ~9B
- **Merged from:** base model + LoRA adapter weights