LoRA Technology Model

Model Overview

Model Name: LoRA Technology
Developed by: Abdul Sittar
Model Type: Text Generation (PEFT, LoRA)
Frameworks: Hugging Face Transformers, PEFT, Safetensors
Languages: English
License: Apache 2.0

This model is a LoRA fine-tuned version of LLaMA 2 7B, adapted for technology-related conversational tasks. Only the LoRA adapters were trained; the base model weights remain frozen, which keeps fine-tuning memory-efficient and leaves the original model intact.
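The idea behind LoRA can be sketched in a few lines of NumPy: the frozen weight matrix W is augmented with a low-rank product B @ A scaled by alpha/r, so only A and B are trained. The dimensions and seed below are illustrative, not taken from this model's configuration.

```python
import numpy as np

# Minimal sketch of a LoRA update, assuming illustrative dimensions.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))     # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

x = rng.standard_normal(d_in)
y = (W + (alpha / r) * B @ A) @ x          # effective forward pass

# With B initialized to zero, the adapter starts as a no-op:
assert np.allclose(y, W @ x)
print("trainable params:", A.size + B.size, "vs full matrix:", W.size)
```

Because only `r * (d_in + d_out)` adapter parameters are trained instead of the full `d_out * d_in` matrix, fine-tuning a 7B-parameter model becomes tractable on modest hardware.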


Model Card

Model Description

This model is intended for technology-related text generation, conversational tasks, and research purposes. It was trained using LoRA adapters on technology domain datasets and is compatible with Hugging Face Transformers for inference.

  • LoRA adapters: Low-rank adaptation for efficient fine-tuning
  • Base model: LLaMA2 7B
  • Model weights format: Safetensors
  • Intended use: Research, simulation, conversational AI for technology domain

License

This model is released under the Apache 2.0 License, which permits:

  • Commercial and non-commercial use
  • Modification and redistribution

Attribution to the original author is required.

Apache 2.0 License Details


Model Files

The following files are included in this repository:

  • adapter_model.safetensors – LoRA adapter weights
  • tokenizer.model – Tokenizer model
  • tokenizer.json – Tokenizer JSON config
  • adapter_config.json – LoRA configuration
  • tokenizer_config.json – Tokenizer configuration
  • special_tokens_map.json – Special tokens mapping
  • chat_template.jinja – Conversation template for inference
  • README.md – Model card and instructions

All large binaries are tracked via Git LFS.
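For orientation, the adapter_config.json file typically looks like the following. The field values shown here (rank, alpha, target modules, and the base-model id) are hypothetical examples of a standard PEFT LoRA configuration, not the actual values shipped in this repository.

```python
import json

# Hypothetical example of an adapter_config.json; the real values in this
# repository may differ.
sample_config = """{
  "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
  "peft_type": "LORA",
  "r": 8,
  "lora_alpha": 16,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "v_proj"],
  "task_type": "CAUSAL_LM"
}"""

config = json.loads(sample_config)
scaling = config["lora_alpha"] / config["r"]  # LoRA scaling factor alpha / r
print(f"peft_type={config['peft_type']}, rank={config['r']}, scaling={scaling}")
```

The `r` and `lora_alpha` fields determine the adapter's capacity and scaling, while `target_modules` lists which attention projections carry adapters.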


Dataset Used

This model was trained using the Social Graph Inference Reddit dataset:

DOI / Link: https://zenodo.org/records/18082502

Authors/Creators:

  • Sittar, Abdul
  • Guček, Alenka
  • Češnovar, Miha

Description:
A large-scale, empirically grounded dataset from Reddit to support agent-based social simulations. Includes:

  • 33 technology-focused agents
  • 14 climate-focused agents
  • 7 COVID-related agents
  • Each domain includes over one million posts and comments

The dataset defines agent categories, derives inter-agent relationships, and builds directed, weighted networks reflecting real user interactions.
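The network construction described above can be sketched in pure Python: each (author → replied-to) interaction increments the weight of a directed edge between two agents. The agent names and interactions below are illustrative, not taken from the dataset.

```python
from collections import Counter

# Toy sketch of building a directed, weighted interaction network, assuming
# each interaction is a (source_agent, target_agent) pair. Illustrative data.
interactions = [
    ("agent_a", "agent_b"),
    ("agent_a", "agent_b"),
    ("agent_b", "agent_c"),
    ("agent_c", "agent_a"),
]

edges = Counter(interactions)  # edge weight = number of interactions
for (src, dst), weight in sorted(edges.items()):
    print(f"{src} -> {dst} (weight {weight})")
```

Repeated interactions between the same pair of agents accumulate as edge weight, so the resulting network reflects interaction intensity, not just connectivity.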


Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base checkpoint id is an assumption; the official gated LLaMA 2 7B
# repository is "meta-llama/Llama-2-7b-hf" (requires access approval).
base_model_name = "meta-llama/Llama-2-7b-hf"
adapter_name = "AbdulSittar/llama2-lora-technology"

# Load tokenizer (shipped with the adapter repository)
tokenizer = AutoTokenizer.from_pretrained(adapter_name)

# Load the frozen base model, then attach the LoRA adapters
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_name)
model.eval()

# Generate text
prompt = "Latest trends in AI and machine learning:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
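Since the repository ships a chat_template.jinja, conversational prompts should follow the model's chat format rather than raw text. The helper below is a hypothetical sketch of the prompt layout that Llama-2-style chat templates typically render; the exact template in this repository may differ, and in practice tokenizer.apply_chat_template handles this automatically.

```python
# Hypothetical sketch of a Llama-2-style chat prompt; the repository's
# chat_template.jinja may render a different layout.
def build_llama2_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant for technology topics.",
    "What are the latest trends in AI?",
)
print(prompt)
```

When the tokenizer is loaded as shown above, passing a list of role/content message dicts to tokenizer.apply_chat_template produces the correctly formatted input without hand-building strings.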