---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- terraform
- terraform-configuration
- infrastructure-as-code
- iac
- hashicorp
- codellama
- lora
- qlora
- peft
- code-generation
- devops
- cloud
- aws
- azure
- gcp
- multi-cloud
- automation
- configuration-management
- cloud-infrastructure
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# terraform-cloud-codellama-7b
**RECOMMENDED MODEL** - An advanced LoRA fine-tuned model for comprehensive Terraform infrastructure-as-code generation, supporting multiple cloud providers (AWS, Azure, GCP). This model generates Terraform configurations, HCL code, and multi-cloud infrastructure automation scripts.
## Model Description
This is the **enhanced model**: an advanced version of terraform-codellama-7b that has been further trained on public AWS, Azure, and GCP documentation. It offers stronger performance for multi-cloud Terraform development, with a deeper understanding of provider-specific resources and best practices.
### Key Features
- **Multi-Cloud Support**: Trained on AWS, Azure, and GCP documentation
- **Enhanced Performance**: Improves on the Stage 1 terraform-codellama-7b model
- **Production Ready**: Optimized for real-world multi-cloud infrastructure development
- **Comprehensive Coverage**: Handles complex cloud provider-specific configurations
- **Efficient Training**: Uses QLoRA (4-bit quantization + LoRA) for memory efficiency
## Model Details
- **Developed by**: Rafi Al Attrach, Patrick Schmitt, Nan Wu, Helena Schneider, Stefania Saju (TUM + IBM Research Project)
- **Model type**: LoRA fine-tuned CodeLlama (Enhanced)
- **Language(s)**: English
- **License**: Apache 2.0
- **Finetuned from**: [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
- **Training method**: QLoRA (4-bit quantization + LoRA)
- **Base Model**: Built on [rafiaa/terraform-codellama-7b](https://huggingface.co/rafiaa/terraform-codellama-7b)
### Technical Specifications
- **Base Model**: CodeLlama-7b-Instruct-hf
- **LoRA Rank**: 64
- **LoRA Alpha**: 16
- **Target Modules**: q_proj, v_proj
- **Training Epochs**: 3 (Stage 1) + Additional training (Stage 2)
- **Max Sequence Length**: 512
- **Quantization**: 4-bit (fp4)
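For reference, a PEFT `LoraConfig` matching these specifications might look like the sketch below (illustrative only; the exact training configuration used by the authors has not been published):
```python
from peft import LoraConfig

# Hypothetical LoRA configuration mirroring the specifications above;
# treat it as an illustration, not the authors' exact training setup.
lora_config = LoraConfig(
    r=64,                                 # LoRA rank
    lora_alpha=16,                        # LoRA alpha
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```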
## Uses
### Direct Use
This model is designed for:
- **Multi-cloud Terraform development**
- **AWS resource configuration** (EC2, S3, RDS, Lambda, etc.)
- **Azure resource management** (Virtual Machines, Storage Accounts, App Services, etc.)
- **GCP resource deployment** (Compute Engine, Cloud Storage, Cloud SQL, etc.)
- **Complex infrastructure orchestration**
- **Cloud provider-specific best practices**
### Example Use Cases
```python
# Generate AWS multi-service infrastructure
prompt = "Create a Terraform configuration for an AWS application with VPC, EC2, RDS, and S3"
```
```python
# Generate Azure App Service with database
prompt = "Create a Terraform configuration for an Azure App Service with PostgreSQL database"
```
```python
# Generate GCP Kubernetes cluster
prompt = "Create a Terraform configuration for a GCP GKE cluster with node pools"
```
```python
# Generate multi-cloud setup
prompt = "Create a Terraform configuration for a hybrid cloud setup using AWS and Azure"
```
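Note that the base CodeLlama-7b-Instruct model expects instructions wrapped in `[INST] ... [/INST]` tags. The prompt template used during fine-tuning is not documented here, so wrapping requests this way is an experiment worth trying rather than a guarantee; a hypothetical sketch:
```python
# Hypothetical instruction-style wrapper based on the base model's chat format;
# the adapter's exact prompt template is not documented, so results may vary.
def build_prompt(request: str) -> str:
    return f"[INST] {request} [/INST]"

prompt = build_prompt(
    "Create a Terraform configuration for an AWS application with VPC, EC2, RDS, and S3"
)
```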
## How to Get Started
### Installation
```bash
pip install transformers torch peft accelerate bitsandbytes
```
### Loading the Model
#### GPU Usage (Recommended)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load base model with 4-bit quantization (GPU)
base_model = "codellama/CodeLlama-7b-Instruct-hf"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "rafiaa/terraform-cloud-codellama-7b")
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Set pad token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```
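Passing `load_in_4bit=True` directly to `from_pretrained` is deprecated in recent `transformers` releases. On newer versions, the same 4-bit setup can be expressed with `BitsAndBytesConfig` (a sketch, assuming a recent `transformers`/`bitsandbytes` install):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Equivalent 4-bit quantization setup using the newer BitsAndBytesConfig API
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",             # matches the fp4 quantization noted above
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-Instruct-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```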
#### CPU Usage (Alternative)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load base model (CPU compatible)
base_model = "codellama/CodeLlama-7b-Instruct-hf"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float32,
    device_map="cpu"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "rafiaa/terraform-cloud-codellama-7b")
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Set pad token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```
### Usage Example
```python
def generate_terraform(prompt, max_length=512):
    # Move inputs to the model's device (important when using device_map)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=max_length,  # max_length counts the prompt tokens as well
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example: Multi-cloud infrastructure
prompt = """
Create a Terraform configuration for a multi-cloud setup:
- AWS: VPC with public/private subnets, EC2 instances
- Azure: Storage account and App Service
- GCP: Cloud SQL database
"""
result = generate_terraform(prompt)
print(result)
```
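Because `generate` returns the prompt followed by the completion, you may want to strip the echoed prompt from the decoded output; a small sketch:
```python
def generate_terraform_completion(prompt, max_new_tokens=512):
    # Return only the newly generated tokens, without the echoed prompt
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```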
### Advanced Usage
```python
# Cloud-specific prompts
aws_prompt = "Create a Terraform configuration for AWS EKS cluster with managed node groups"
azure_prompt = "Create a Terraform configuration for Azure Kubernetes Service (AKS)"
gcp_prompt = "Create a Terraform configuration for GCP Cloud Run service"
# Generate configurations
aws_config = generate_terraform(aws_prompt)
azure_config = generate_terraform(azure_prompt)
gcp_config = generate_terraform(gcp_prompt)
```
## Training Details
### Training Data
**Stage 1**: Public Terraform Registry documentation
**Stage 2**: Additional training on:
- **AWS Documentation**: EC2, S3, RDS, Lambda, VPC, IAM, etc.
- **Azure Documentation**: Virtual Machines, Storage Accounts, App Services, Key Vault, etc.
- **GCP Documentation**: Compute Engine, Cloud Storage, Cloud SQL, GKE, etc.
### Training Procedure
- **Method**: QLoRA (4-bit quantization + LoRA)
- **Two-Stage Training**:
1. Terraform Registry documentation
2. Cloud provider documentation (AWS, Azure, GCP)
- **LoRA Rank**: 64
- **LoRA Alpha**: 16
- **Target Modules**: q_proj, v_proj
- **Training Epochs**: 3 (Stage 1) + Additional training (Stage 2)
- **Max Sequence Length**: 512
- **Quantization**: 4-bit (fp4)
### Training Hyperparameters
- **Training regime**: 4-bit mixed precision
- **LoRA Dropout**: 0.0
- **Learning Rate**: Optimized for QLoRA training
- **Batch Size**: Optimized for memory efficiency
## Performance Comparison
| Model | Terraform Knowledge | AWS Support | Azure Support | GCP Support | Multi-Cloud Capability |
|-------|-------------------|-------------|---------------|-------------|-------------------|
| terraform-codellama-7b | Excellent | Limited | Limited | Limited | Basic |
| **terraform-cloud-codellama-7b** | Excellent | Excellent | Excellent | Excellent | Advanced |
## Limitations and Bias
### Known Limitations
- **Context Length**: Limited to 512 tokens due to training configuration
- **Domain Specificity**: Optimized for Terraform and cloud infrastructure
- **Base Model Limitations**: Inherits limitations from CodeLlama-7b-Instruct-hf
- **Cloud Provider Updates**: May not include the latest cloud provider features
### Recommendations
- Use for Terraform and cloud infrastructure tasks
- Validate generated configurations before deployment (see the validation sketch after this list)
- Consider the 512-token context limit for complex configurations
- For production use, always review and test generated code
- Stay updated with cloud provider documentation for latest features
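As a practical way to follow the validation recommendation above, generated HCL can be written to a working directory and checked with the Terraform CLI (a minimal sketch, assuming the `terraform` binary is installed and the generated text is plain HCL):
```python
import pathlib
import subprocess

def validate_terraform(hcl_text: str, workdir: str = "generated_tf") -> bool:
    # Write the generated configuration and run `terraform init`/`terraform validate`
    path = pathlib.Path(workdir)
    path.mkdir(exist_ok=True)
    (path / "main.tf").write_text(hcl_text)
    subprocess.run(["terraform", "init", "-backend=false"], cwd=path, check=True)
    result = subprocess.run(["terraform", "validate"], cwd=path)
    return result.returncode == 0
```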
## Environmental Impact
- **Training Method**: QLoRA reduces computational requirements significantly
- **Hardware**: Trained using efficient 4-bit quantization
- **Carbon Footprint**: Reduced compared to full fine-tuning due to QLoRA efficiency
- **Two-Stage Approach**: Efficient incremental training
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{terraform-cloud-codellama-7b,
  title={terraform-cloud-codellama-7b: A Multi-Cloud LoRA Fine-tuned Model for Terraform Code Generation},
  author={Rafi Al Attrach and Patrick Schmitt and Nan Wu and Helena Schneider and Stefania Saju},
  year={2024},
  url={https://huggingface.co/rafiaa/terraform-cloud-codellama-7b}
}
```
## Related Models
- **Base Model**: [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
- **Stage 1 Model**: [rafiaa/terraform-codellama-7b](https://huggingface.co/rafiaa/terraform-codellama-7b)
- **This Model**: [rafiaa/terraform-cloud-codellama-7b](https://huggingface.co/rafiaa/terraform-cloud-codellama-7b) (Recommended)
## Model Card Contact
- **Author**: rafiaa
- **Model Repository**: [HuggingFace Model](https://huggingface.co/rafiaa/terraform-cloud-codellama-7b)
- **Issues**: Please report issues through the HuggingFace model page
## Acknowledgments
- **Research Project**: Conducted in early 2024 at TUM in collaboration with IBM
- **Training Data**: Public documentation from Terraform Registry, AWS, Azure, and GCP
- **Base Model**: Meta's CodeLlama-7b-Instruct-hf
- **Training Method**: QLoRA for efficient fine-tuning
---
*This model represents the culmination of a two-stage fine-tuning approach, combining Terraform expertise with comprehensive cloud provider knowledge for superior infrastructure-as-code generation.*