---
base_model: unsloth/gemma-2b-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/gemma-2b-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---

# Model Card for ScoLaM

## Model Details

### Model Description

ScoLaM is a fine-tuned language model based on the **unsloth/gemma-2b-bnb-4bit** base model. It uses Parameter-Efficient Fine-Tuning (PEFT) techniques, specifically LoRA (Low-Rank Adaptation), to enable efficient adaptation with reduced compute and storage requirements. ScoLaM is designed primarily for text-generation tasks and can be applied in domains requiring lightweight, performant language modeling.

- **Developed by:** Team Scorton 
- **Funded by:** SchoolyAI
- **Shared by:** https://github.com/scorton
- **Model type:** Transformer-based causal language model with LoRA fine-tuning
- **Language(s):** English (primary), French, Spanish
- **License:** [Specify license, e.g., Apache 2.0, MIT, etc.]
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit (4-bit quantized base model)
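As a quick intuition for the LoRA scheme described above: instead of updating the full weight matrix, LoRA trains a low-rank update added on top of the frozen base weights. The toy sketch below uses made-up shapes, rank, and scaling (not ScoLaM's actual configuration) to show why this is parameter-efficient:

```python
import numpy as np

# Toy illustration of LoRA: the frozen weight W is adapted by a low-rank
# update B @ A, scaled by alpha / r. All shapes and values are illustrative.
d, k, r, alpha = 8, 8, 2, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen base weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable, rank-r
B = np.zeros((d, r))                     # trainable, initialized to zero

W_adapted = W + (alpha / r) * (B @ A)

# With B = 0 the adapter starts as a no-op: outputs match the base model.
assert np.allclose(W_adapted, W)

# Only r*(d+k) parameters train, versus d*k for full fine-tuning.
print(r * (d + k), "trainable vs", d * k, "full")
```

For Gemma-scale matrices the savings are far larger: the adapter's parameter count grows linearly with the matrix dimensions rather than quadratically.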

### Model Sources

- **Repository:** hugging.co/schooly
- **Paper:** [Link to relevant publication if any]
- **Demo:** [URL to demo application if any]

## Uses

### Direct Use

ScoLaM is intended for general-purpose text generation tasks such as drafting, creative writing, summarization, or chatbot dialogue generation. It can be used directly via text-generation pipelines in Hugging Face Transformers using PEFT adapters.

### Downstream Use

ScoLaM can serve as a base for further fine-tuning on domain-specific datasets or for integration into larger NLP systems, chatbots, or AI assistants that benefit from efficient fine-tuning and inference.

### Out-of-Scope Use

- Use in highly safety-critical or sensitive applications without further validation.
- Generation of misleading, harmful, or biased content.
- Applications requiring strong factual accuracy without additional grounding.

## Bias, Risks, and Limitations

ScoLaM inherits biases present in the base model and training data. It may produce biased, harmful, or nonsensical outputs if used improperly. Its quantized 4-bit format may also affect precision in some use cases.

### Recommendations

Users should evaluate outputs carefully, especially in high-stakes scenarios. Fine-tuning or prompt engineering may be needed to mitigate undesired behavior.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

base_model = "unsloth/gemma-2b-bnb-4bit"
adapter_model = "path_or_id_to_scolam_adapter"

# Load the 4-bit quantized base model (requires bitsandbytes),
# then attach the ScoLaM LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_model)

text_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = text_gen("Your prompt here", max_new_tokens=50)
print(output)
```