petrusp committed
Commit a7367db · Parent(s): 32264cc

Model card

Files changed (1):
  1. README.md +149 -0

README.md ADDED
@@ -0,0 +1,149 @@
---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen3-8B
datasets:
- HuggingFaceTB/smollm-corpus
tags:
- text-generation
- transformers
- safetensors
- qwen
- climate
- planetary-boundaries
- domain-adaptation
pipeline_tag: text-generation
---

# ClimateGPT-3-8B

ClimateGPT-3-8B is an open language model domain-adapted for climate science and the **Planetary Boundaries** framework.

## Model details

- **Base model**: `Qwen/Qwen3-8B`
- **Model type**: Causal LM
- **Language(s)**: English
- **Context length**: 8192 tokens (SFT configuration)
- **License**: Apache-2.0
- **Release artifact**: Fully merged weights (standalone model; no adapter required)

## Intended use

- Climate and sustainability Q&A
- Planetary Boundaries–focused education and analysis
- Drafting and summarization of climate-related content

## Limitations

- The model may produce incorrect or outdated information.
- Training data is largely English web content; this can introduce geographic/cultural and topical biases.
- The model is not a substitute for professional scientific, medical, legal, or policy advice.

## Training

ClimateGPT-3-8B was built in multiple stages:

### Continued pretraining (CPT)

Starting from `Qwen/Qwen3-8B`, we performed continued pretraining on climate-focused corpora primarily derived from FineWeb-Edu (SmolLM-Corpus), using climate- and Planetary Boundaries–oriented filtering.

The data selection emphasizes climate science and Planetary Boundaries terminology and includes filtering to reduce off-topic matches from ambiguous terms.
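
The exact filtering pipeline is not released. Purely as an illustration, a minimal keyword-based filter over the public corpus could look like the sketch below; the config name, keyword list, and field name are assumptions, not the actual pipeline.

```python
# Illustrative only: a minimal keyword filter over the public corpus.
# The real CPT pipeline (keywords, disambiguation rules, thresholds)
# is not released with this model.
from datasets import load_dataset

# Assumption: the "fineweb-edu-dedup" config of smollm-corpus with a
# "text" field; adjust to the subset you actually want to filter.
ds = load_dataset(
    "HuggingFaceTB/smollm-corpus",
    "fineweb-edu-dedup",
    split="train",
    streaming=True,
)

# Hypothetical topic keywords; a real list would be larger and paired
# with rules that reduce off-topic matches from ambiguous terms.
KEYWORDS = (
    "planetary boundaries",
    "climate change",
    "ocean acidification",
    "biosphere integrity",
    "biogeochemical flows",
)

def looks_on_topic(example):
    text = example["text"].lower()
    return any(keyword in text for keyword in KEYWORDS)

climate_subset = ds.filter(looks_on_topic)
```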

### Supervised fine-tuning (SFT)

We performed supervised fine-tuning using a mixture of the following (a serialization sketch appears after the list):

- Climate instruction-following data
- Multi-turn conversations
- Safety/refusal examples
- Tool-use data
- Synthetic climate / Planetary Boundaries Q&A
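
Multi-turn examples of this kind are most naturally serialized with the base model's chat template. The sketch below is only an assumption about the formatting; the example messages are invented, and the actual SFT data is not released.

```python
# Sketch: serializing a multi-turn conversation with the Qwen3 chat
# template. The content is hypothetical; the real SFT data is not released.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

messages = [
    {"role": "user", "content": "Which planetary boundary covers ocean acidification?"},
    {"role": "assistant", "content": "Ocean acidification is itself one of the nine planetary boundaries."},
    {"role": "user", "content": "And which processes push it toward its limit?"},
]

# tokenize=False returns the formatted string rather than token IDs.
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```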

## Training data

### Public data

- **FineWeb-Edu (via `HuggingFaceTB/smollm-corpus`)**
  - Used for climate- and Planetary Boundaries–filtered continued pretraining.
  - **Dataset license**: ODC-By
  - Dataset page: https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus

### Non-public / generated data

In addition to public data, the training mix includes internal and/or generated instruction data. These datasets are not redistributed with this model.

## Evaluation

We evaluate climate-domain performance using a Planetary Boundaries evaluation suite compatible with EleutherAI’s `lm-evaluation-harness`; a reproduction sketch follows the results table.

A representative comparison (from this project’s Planetary Boundaries evaluation artifacts) between a ClimateGPT 8B checkpoint and the base Qwen3-8B:

| Task | Metric | ClimateGPT | Qwen3-8B |
|---|---|---:|---:|
| `planetary_boundaries_mcq_large` | acc | 0.4422 | 0.3533 |
| `planetary_boundaries_mcq_large` | acc_norm | 0.4278 | 0.3900 |
| `planetary_boundaries_mcq_hard` | acc | 0.3467 | 0.2711 |
| `planetary_boundaries_mcq_hard` | acc_norm | 0.3800 | 0.3400 |
| `planetary_boundaries_qa_large` | exact_match | 0.9000 | 0.8467 |
| `planetary_boundaries_qa_strict_core_nolist` | exact_match | 0.6556 | 0.4889 |
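
The `planetary_boundaries_*` tasks are custom task definitions, not part of the upstream harness. Assuming their YAML files are available locally (the `./pb_tasks` path below is a placeholder), the scores can be reproduced through the harness's Python API:

```python
# Sketch: running the custom Planetary Boundaries tasks with
# lm-evaluation-harness (v0.4+). Assumes the task YAMLs live in ./pb_tasks.
import lm_eval
from lm_eval.tasks import TaskManager

# Register the custom task definitions alongside the built-in ones.
task_manager = TaskManager(include_path="./pb_tasks")

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Erasmus-AI/climategpt-3-8b",
    tasks=["planetary_boundaries_mcq_large", "planetary_boundaries_mcq_hard"],
    task_manager=task_manager,
)
print(results["results"])
```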

## How to use

### Transformers

This repository contains a standalone model. You can load it directly with Transformers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Erasmus-AI/climategpt-3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the Planetary Boundaries framework in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

### vLLM

This model is intended to be compatible with vLLM: the release is a fully merged checkpoint, so it should load like any other Qwen3-8B model.
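
As a sketch (not verified against this exact checkpoint), offline generation with vLLM's Python API:

```python
# Sketch: offline inference with vLLM. Not verified against this exact
# checkpoint; merged Qwen3-8B weights are expected to load directly.
from vllm import LLM, SamplingParams

llm = LLM(model="Erasmus-AI/climategpt-3-8b")
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

outputs = llm.generate(
    ["Explain the Planetary Boundaries framework in simple terms."],
    sampling,
)
print(outputs[0].outputs[0].text)
```

For serving, `vllm serve Erasmus-AI/climategpt-3-8b` exposes the model over an OpenAI-compatible endpoint.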

## License

- **Model weights**: Apache-2.0
- **Base model**: `Qwen/Qwen3-8B` (Apache-2.0)

## Attribution

If you use this model, please cite/attribute the upstream resources where appropriate:

- Base model: https://huggingface.co/Qwen/Qwen3-8B
- Training data (public portion): https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus (ODC-By)

## Citation

If you use this model in academic work, please cite:

```bibtex
@misc{climategpt3,
  title = {ClimateGPT-3-8B},
  howpublished = {\url{https://huggingface.co/Erasmus-AI/climategpt-3-8b}},
  year = {2026}
}
```

## Contact

If you have questions, issues, or evaluation results to share, please open a discussion/issue in the repository that accompanies this release.