---
base_model: unsloth/gemma-2b-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
  - base_model:adapter:unsloth/gemma-2b-bnb-4bit
  - lora
  - sft
  - transformers
  - trl
  - unsloth
---

# Model Card for ScoLaM

## Model Details

### Model Description

ScoLaM is a fine-tuned language model based on the unsloth/gemma-2b-bnb-4bit base model. It uses Parameter-Efficient Fine-Tuning (PEFT) techniques, specifically LoRA (Low-Rank Adaptation), to enable efficient adaptation with reduced compute and storage requirements. ScoLaM is designed primarily for text-generation tasks and can be applied in domains requiring lightweight, performant language modeling.
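The exact training configuration is not documented here, but a minimal sketch of this kind of LoRA setup with the `peft` library looks like the following. The rank, alpha, dropout, and target modules are illustrative assumptions, not ScoLaM's recorded hyperparameters.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the 4-bit quantized base and prepare it for k-bit training.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-bnb-4bit", device_map="auto")
base = prepare_model_for_kbit_training(base)

# Illustrative values only; ScoLaM's actual hyperparameters are not documented here.
lora_config = LoraConfig(
    r=16,                        # low-rank dimension (assumption)
    lora_alpha=32,               # LoRA scaling factor (assumption)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumption)
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```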

- **Developed by:** Team Scorton
- **Funded by:** SchoolyAI
- **Shared by:** https://github.com/scorton
- **Model type:** Transformer-based causal language model with LoRA fine-tuning
- **Language(s):** English (primary), French and Spanish (secondary)
- **License:** [Specify license, e.g., Apache 2.0, MIT, etc.]
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit (4-bit quantized base model)

### Model Sources

- **Repository:** hugging.co/schooly
- **Paper:** [Link to relevant publication if any]
- **Demo:** [URL to demo application if any]

## Uses

### Direct Use

ScoLaM is intended for general-purpose text generation tasks such as drafting, creative writing, summarization, and chatbot dialogue. It can be used directly through Hugging Face Transformers text-generation pipelines with the PEFT adapter applied (see How to Get Started with the Model below).

### Downstream Use

ScoLaM can serve as a base for further fine-tuning on domain-specific datasets or for integration into larger NLP systems, chatbots, or AI assistants that benefit from efficient fine-tuning and inference.
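Given the `trl` and `sft` tags above, continued supervised fine-tuning on a domain dataset might look like the sketch below. The dataset name is hypothetical, the adapter path is a placeholder from the example further down, and the training arguments are deliberately minimal.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from peft import PeftModel
from trl import SFTConfig, SFTTrainer

# Attach the ScoLaM adapter in trainable mode on top of the quantized base.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "path_or_id_to_scolam_adapter", is_trainable=True)

# Hypothetical dataset; substitute your own (expects a "text" column).
dataset = load_dataset("your-org/your-domain-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="scolam-domain-ft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()
```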

### Out-of-Scope Use

- Use in highly safety-critical or sensitive applications without further validation.
- Generation of misleading, harmful, or biased content.
- Applications requiring strong factual accuracy without additional grounding.

## Bias, Risks, and Limitations

ScoLaM inherits biases present in its base model and training data, and may produce biased, harmful, or nonsensical outputs. Because the base model is quantized to 4 bits, numerical precision may also be reduced in some use cases.

### Recommendations

Users should evaluate outputs carefully, especially in high-stakes scenarios. Fine-tuning or prompt engineering may be needed to mitigate undesired behavior.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

base_model = "unsloth/gemma-2b-bnb-4bit"
adapter_model = "path_or_id_to_scolam_adapter"  # replace with the ScoLaM adapter repo ID or local path

# Load the 4-bit quantized base (requires bitsandbytes) and attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_model)

text_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = text_gen("Your prompt here", max_new_tokens=50)
print(output)
```
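For deployment without a `peft` dependency, the adapter can be merged into a full-precision copy of the base weights. A minimal sketch, assuming the original unquantized checkpoint is google/gemma-2b (merging directly into 4-bit weights is lossy):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# "google/gemma-2b" is an assumption about the full-precision original of the
# 4-bit base; merge the LoRA weights into it and save a standalone model.
full_base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype="auto")
merged = PeftModel.from_pretrained(full_base, "path_or_id_to_scolam_adapter").merge_and_unload()
merged.save_pretrained("scolam-merged")  # loadable later without peft installed
```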