# LoRA Climate Model
## Model Overview
- **Model Name:** LoRA Climate
- **Developed by:** Abdul Sittar
- **Model Type:** Text Generation (PEFT, LoRA)
- **Frameworks:** Hugging Face Transformers, PEFT, Safetensors
- **Languages:** English
- **License:** Apache 2.0
This model is a LoRA fine-tuned version of LLaMA2 7B, adapted for climate-related conversational tasks. It supports safe and efficient text generation while keeping the base model frozen and training only the LoRA adapters.
## Model Description
This model is intended for climate-related text generation, conversational tasks, and research purposes. It was trained using LoRA adapters on climate datasets and is compatible with Hugging Face Transformers for inference.
- LoRA adapters: Low-rank adaptation for efficient fine-tuning
- Base model: LLaMA2 7B
- Model weights format: Safetensors
- Intended use: Research, simulation, conversational AI for climate domain
## Dataset Used
This model was trained using the Social Graph Inference Reddit dataset:
DOI / Link: https://zenodo.org/records/18082502
Authors/Creators:
- Sittar, Abdul
- Guček, Alenka
- Češnovar, Miha
Description:
A large-scale, empirically grounded dataset from Reddit to support agent-based social simulations. Includes:
- 33 technology-focused agents
- 14 climate-focused agents
- 7 COVID-related agents
- Each domain includes over one million posts and comments
The dataset defines agent categories, derives inter-agent relationships, and builds directed, weighted networks reflecting real user interactions.
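A directed, weighted interaction network of the kind described can be sketched with `networkx`, where repeated interactions between the same pair of agents increase the edge weight. The agent names and interaction records below are invented for illustration and are not taken from the dataset.

```python
import networkx as nx

# Hypothetical (source_agent, target_agent) interaction events,
# e.g. one agent's users replying to another agent's posts
interactions = [
    ("solar_agent", "ev_agent"),
    ("solar_agent", "ev_agent"),
    ("ev_agent", "policy_agent"),
]

# Build a directed graph; edge weight counts repeated interactions
G = nx.DiGraph()
for src, dst in interactions:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

print(G["solar_agent"]["ev_agent"]["weight"])  # 2
```

The resulting weighted digraph is the kind of structure an agent-based simulation can consume directly, with edge weights reflecting interaction intensity between agent communities.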
## License
This model is released under the Apache 2.0 License, which allows:
- Commercial and non-commercial use
- Modification and redistribution

Attribution to the original author is required.
## Model Files
Included files:
- `adapter_model.safetensors` – LoRA adapter weights
- `tokenizer.model` – Tokenizer model
- `tokenizer.json` – Tokenizer JSON config
- `adapter_config.json` – LoRA configuration
- `tokenizer_config.json` – Tokenizer configuration
- `special_tokens_map.json` – Special tokens mapping
- `chat_template.jinja` – Conversation template for inference
- `README.md` – Model card and instructions
All large binaries are tracked via Git LFS.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "AbdulSittar/llama2-lora-climate"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model and attach the LoRA adapters
# (requires the `peft` package and access to the LLaMA2 base weights)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model.eval()

# Generate text
prompt = "Climate change impacts on renewable energy:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
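For multi-turn use, the repository ships a `chat_template.jinja`; with Transformers you would format conversations via `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`. As a rough, self-contained illustration of what such a template does, the snippet below renders a simplified stand-in template with Jinja2 (the template string and role format are assumptions, not the shipped file):

```python
from jinja2 import Template

# Simplified stand-in for chat_template.jinja (assumption, not the real template)
template = Template(
    "{% for m in messages %}[{{ m.role }}] {{ m.content }}\n{% endfor %}[assistant]"
)

messages = [
    {"role": "system", "content": "You are a climate-domain assistant."},
    {"role": "user", "content": "How does warming affect solar output?"},
]

# Render the conversation into a single prompt string ending with the
# assistant turn marker, so generation continues as the assistant
prompt = template.render(messages=messages)
print(prompt)
```

In practice you would not render the template yourself: `tokenizer.apply_chat_template` picks up the model's bundled template automatically and can return token tensors ready for `model.generate`.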