---
license: mit
library_name: transformers
pipeline_tag: text-generation
language:
- en
- es
- fr
tags:
- long-context
- multilingual
- ntk-scaling
- hybrid-merge
- uncensored
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- allenai/longform
- EleutherAI/long-range-arena
- HuggingFaceH4/openhermes-2.5
- microsoft/orca-math-word-problems-200k
- laion/laion-coco
- HuggingFaceH4/multilingual-open-llm-eval
model-index:
- name: Abigail45/Green
  results:
  - task:
      type: text-generation
    dataset:
      name: long-range-arena
      type: lra
    metrics:
    - name: ROUGE-L (50k context)
      type: rouge-l
      value: 45.67
    - name: Exact Match (50k)
      type: em
      value: 62.34
  - task:
      type: text-generation
    dataset:
      name: cais/mmlu
      type: mmlu
    metrics:
    - name: MMLU (0-shot, 50k context)
      type: mmlu
      value: 72.45
    - name: ARC-Challenge (25-shot)
      type: arc_challenge
      value: 78.92
---

# Green 7B

Green is an open-source long-context model built on mistralai/Mistral-7B-Instruct-v0.3, supporting English, Spanish, and French, with NTK-scaled positional embeddings for extended context lengths.

## 🔧 Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Abigail45/Green"

# Load the weights in half precision and shard them across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Write a short poem about green forests."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
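
The `ntk-scaling` tag refers to NTK-aware RoPE extension, in which the rotary base frequency is raised so the model can address positions beyond its original training window. As a hedged sketch of the underlying arithmetic only (the head dimension, scale factor, and original base below are illustrative assumptions, not this model's confirmed configuration):

```python
def ntk_scaled_base(base: float, scale: float, head_dim: int) -> float:
    """Adjusted RoPE base for an NTK-aware context extension.

    Raising the base by scale ** (d / (d - 2)) stretches the low-frequency
    rotary components (which encode long-range position) while leaving the
    high-frequency components close to their trained values.
    """
    return base * scale ** (head_dim / (head_dim - 2))


# Illustrative numbers: a 4x context extension with head_dim = 128 and a
# rope base of 1e6 (assumed, not read from this model's config).
original_base = 1_000_000.0
new_base = ntk_scaled_base(original_base, scale=4.0, head_dim=128)
print(round(new_base))
```

With `scale=1.0` the base is unchanged, so short-context behavior is preserved; larger scales grow the base slightly faster than linearly, which is the distinguishing feature of the NTK-aware variant over plain position interpolation.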