🏑 Fine-Tuned BERT for Interior Design (Living Room)

This is a fine-tuned BERT model for interior design prompt validation and style classification. It was developed as part of my Final Year Project: Text-to-Image Interior Design Generator with Generative AI Assistance.

The model supports:

  • ✅ Prompt Validation – distinguishes valid from invalid design prompts

  • 🏷️ Style Classification – classifies valid prompts into one of 7 living-room design styles:

    • Modern
    • Scandinavian
    • Rustic
    • Industrial
    • Traditional
    • Mid-Century Modern
    • Coastal

📊 Results

  • Prompt Validation (binary classification) → F1-score: 1.00
  • Style Classification (7 classes) → F1-score: 0.99
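For context, the macro-averaged F1 reported here can be computed without any extra dependencies. A minimal pure-Python sketch (the label lists below are illustrative, not the project's evaluation data):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Illustrative predictions over the 7 style ids (0..6)
truth = [0, 1, 2, 3, 4, 5, 6, 0]
preds = [0, 1, 2, 3, 4, 5, 6, 1]
print(round(macro_f1(truth, preds, range(7)), 3))
```

In practice the same number comes from `sklearn.metrics.f1_score(..., average="macro")`; the sketch just makes the definition explicit.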

📂 Model Details

  • Base model: bert-base-uncased

  • Library: Transformers (PyTorch)

  • Trained on: a custom dataset of interior design prompts

  • Labels mapping:

    {
      "id2label": {
        "0": "Modern",
        "1": "Scandinavian",
        "2": "Rustic",
        "3": "Industrial",
        "4": "Traditional",
        "5": "Mid-Century Modern",
        "6": "Coastal"
      },
      "label2id": {
        "Modern": 0,
        "Scandinavian": 1,
        "Rustic": 2,
        "Industrial": 3,
        "Traditional": 4,
        "Mid-Century Modern": 5,
        "Coastal": 6
      }
    }
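In a Transformers checkpoint this mapping lives in config.json, where JSON object keys are always strings. As a quick sanity check, the two dictionaries can be parsed and cross-validated with the standard library (a sketch using an inline copy of the JSON above):

```python
import json

LABELS_JSON = """
{
  "id2label": {"0": "Modern", "1": "Scandinavian", "2": "Rustic",
               "3": "Industrial", "4": "Traditional",
               "5": "Mid-Century Modern", "6": "Coastal"},
  "label2id": {"Modern": 0, "Scandinavian": 1, "Rustic": 2,
               "Industrial": 3, "Traditional": 4,
               "Mid-Century Modern": 5, "Coastal": 6}
}
"""

cfg = json.loads(LABELS_JSON)
# JSON keys are strings, so id2label keys need an int() cast for indexing
id2label = {int(k): v for k, v in cfg["id2label"].items()}

# Verify the two mappings are exact inverses of each other
assert all(cfg["label2id"][name] == idx for idx, name in id2label.items())
print(id2label[6])  # → Coastal
```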
    

βš™οΈ How to Use

from transformers import BertTokenizer, BertForSequenceClassification
import torch

# Load model and tokenizer from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("aimhkimi74/Bert-Model-living-room")
model = BertForSequenceClassification.from_pretrained("aimhkimi74/Bert-Model-living-room")
model.eval()

# Example prompt
text = "A modern living room with a gray sofa and wooden floor."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
pred = torch.argmax(outputs.logits, dim=-1)

# Map the predicted class id to a style name via the model's own config
print("Predicted Style:", model.config.id2label[pred.item()])
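The argmax above keeps only the winning class; if a confidence score is also useful, the logits can be turned into a probability distribution with a softmax. A dependency-free sketch (the logit values are made up for illustration):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the 7 style classes
logits = [4.1, 0.3, -1.2, 0.0, -0.5, 1.1, -2.0]
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(f"class {best}, confidence {probs[best]:.2f}")
```

With PyTorch tensors, `torch.softmax(outputs.logits, dim=-1)` does the same job in one call.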

📖 Training Procedure

  • Optimizer: AdamW
  • Learning Rate: 5e-5
  • Epochs: 4
  • Batch size: 32
  • Evaluation metrics: Accuracy, F1-score
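If training with the Transformers Trainer, these hyperparameters map onto TrainingArguments roughly as follows (a configuration sketch only, not the project's exact training script; output_dir is illustrative, and AdamW is the Trainer's default optimizer):

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; dataset and metric setup are omitted
training_args = TrainingArguments(
    output_dir="bert-living-room",      # illustrative path
    learning_rate=5e-5,
    num_train_epochs=4,
    per_device_train_batch_size=32,
)
```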

⚠️ Intended Use & Limitations

  • Intended use: Assist a text-to-image system by validating prompts and tagging living-room styles.
  • Not intended for: Architectural safety decisions, non-living-room styles, or multilingual inputs (English only).
  • Known limits: Performance may drop outside the 7 styles or with very short/ambiguous prompts.

📜 License

This model is released under the MIT License.

📚 References

  • Devlin, J. et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • Hugging Face Transformers library
  • Rombach, R. et al. High-Resolution Image Synthesis with Latent Diffusion Models
📦 Model Format

  • Safetensors · 0.1B params · F32