
Architecture Feedback Generator (Classification & Prompt Only)

This Gradio application classifies an architectural image and an accompanying text input, then generates a structured prompt from the classification results. This version covers only the classification and prompt-generation steps; it does not call a Large Language Model (LLM) to generate the feedback itself.

Functionality

The application takes two inputs:

  1. Architectural Image: Upload an image representing an architectural design.
  2. Text Description or Question: Provide a text input related to the architectural design or a question about it.

Based on these inputs, the application performs:

  • Image Classification: Classifies the image into different architectural design stages (e.g., brainstorm, design iteration, final review).
  • Text Classification: Determines if the text input contains abstract architectural concepts (High Concept: Yes/No) and provides a confidence score.

The results of both classifications are then used to generate a structured prompt, intended for use with a Large Language Model (LLM).
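Combining the two classification results into a prompt can be sketched as a simple template function. The field names and wording below are illustrative, not the app's actual format:

```python
def build_llm_prompt(image_label: str, image_conf: float,
                     high_concept: bool, text_conf: float,
                     user_text: str) -> str:
    """Assemble a structured LLM prompt from the two classification results.

    The template and field names here are hypothetical; app.py may
    format its prompt differently.
    """
    concept = "Yes" if high_concept else "No"
    return (
        f"Design stage: {image_label} (confidence {image_conf:.2f})\n"
        f"High Concept: {concept} (confidence {text_conf:.2f})\n"
        f"User input: {user_text}\n"
        "Task: Provide architectural design feedback appropriate "
        "to this stage and conceptual level."
    )

prompt = build_llm_prompt("design iteration", 0.91, True, 0.78,
                          "How can I improve circulation in this plan?")
```

The prompt front-loads the classifier outputs so a downstream LLM can condition its feedback on the detected design stage and conceptual level.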

How to Use

  1. Upload an architectural image using the image input box.
  2. Enter your text description or question in the text input box.
  3. Click the "Perform Classification & Generate Prompt" button.
  4. The application will display the Image Classification Results, Text Classification Results, and the Generated Prompt for LLM.

Models Used

  • Image Classification Model: A CNN model hosted on Hugging Face Hub (keerthikoganti/architecture-design-stages-compact-cnn).
  • Text Embedding Model: sentence-transformers/all-MiniLM-L6-v2 from Hugging Face.
  • Text Classification Model: An AutoGluon TabularPredictor model hosted on Hugging Face Hub (kaitongg/my-autogluon-model), trained on text embeddings.
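Note that the text classifier operates on embeddings rather than raw text: the input is first embedded with all-MiniLM-L6-v2, and the resulting vector is passed to the AutoGluon TabularPredictor as features. A minimal sketch of that flow, with stubs standing in for the real models (the actual code would call `SentenceTransformer(...).encode` and `TabularPredictor.predict_proba`):

```python
# Sketch of the embed-then-classify flow. The two stubs below stand in
# for sentence-transformers/all-MiniLM-L6-v2 and the AutoGluon model
# kaitongg/my-autogluon-model; return values are placeholders.

def embed_text(text: str) -> list:
    # Stub: all-MiniLM-L6-v2 produces a 384-dimensional sentence embedding.
    return [0.0] * 384

def predict_high_concept(embedding: list) -> tuple:
    # Stub: the real TabularPredictor treats the embedding as tabular
    # features and returns a class label plus a probability.
    score = 0.5  # placeholder confidence
    return score >= 0.5, score

def classify_text(text: str) -> dict:
    emb = embed_text(text)
    label, conf = predict_high_concept(emb)
    return {"high_concept": "Yes" if label else "No", "confidence": conf}

result = classify_text("A building as a dialogue between light and mass.")
```

The design choice here is to reuse a general-purpose sentence encoder and train only a lightweight tabular classifier on top, which avoids fine-tuning a full language model.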

Deployment

This application is designed to be deployed on Hugging Face Spaces.

  • app.py: Contains the complete Gradio application code, including model loading, function definitions, and the Gradio interface.
  • requirements.txt: Lists the necessary Python packages to install for the Space environment.
  • Models: The models are loaded directly from Hugging Face Hub within the app.py file.
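A plausible requirements.txt for this Space might look like the following. The exact package set and the deep-learning backend for the CNN are assumptions; adjust to match the imports in app.py:

```
gradio
huggingface_hub
sentence-transformers
autogluon.tabular
torch      # assumed backend for the image CNN; may differ
pillow
```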

To deploy this application:

  1. Create a new Hugging Face Space.
  2. Choose the "Gradio" application template.
  3. Upload the generated app.py and requirements.txt files to your Space.
  4. Ensure any necessary secrets (like HF_TOKEN_WRITE if your text predictor repo is private) are added to your Space settings as environment variables.
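Inside app.py, a Space secret is read like any environment variable. A minimal sketch, assuming the secret is named HF_TOKEN_WRITE as above (the commented download call is illustrative):

```python
import os

def get_hf_token():
    """Read the Hugging Face token from the Space's environment.

    Returns None when no secret is configured, so public repos
    still load without authentication.
    """
    return os.environ.get("HF_TOKEN_WRITE")

# Illustrative use with huggingface_hub (not executed here):
# from huggingface_hub import snapshot_download
# snapshot_download("kaitongg/my-autogluon-model", token=get_hf_token())
```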

