# Architecture Feedback Generator (Classification & Prompt Only)
This Gradio application classifies an architectural image and a text input, then generates a structured prompt from the classification results. This version covers only the classification and prompt-generation steps; it does not call a Large Language Model to generate the feedback itself.
## Functionality
The application takes two inputs:
- Architectural Image: Upload an image representing an architectural design.
- Text Description or Question: Provide a text input related to the architectural design or a question about it.
Based on these inputs, the application performs:
- Image Classification: Classifies the image into different architectural design stages (e.g., brainstorm, design iteration, final review).
- Text Classification: Determines if the text input contains abstract architectural concepts (High Concept: Yes/No) and provides a confidence score.
The results of both classifications are then used to generate a structured prompt, intended for use with a Large Language Model (LLM).
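The prompt-assembly step described above can be sketched as follows. This is an illustrative example, not the app's actual code: the function name and prompt wording are assumptions; only the classification fields (design stage, High Concept flag, confidence score) come from this README.

```python
# Hypothetical sketch of combining the two classification results into a
# structured LLM prompt. Field names and wording are illustrative.
def build_llm_prompt(image_label: str, high_concept: str,
                     confidence: float, user_text: str) -> str:
    """Assemble a structured prompt from the classification outputs."""
    return (
        "You are an architecture design critic.\n"
        f"Design stage (image classifier): {image_label}\n"
        f"Contains abstract concepts: {high_concept} "
        f"(confidence {confidence:.2f})\n"
        f"User input: {user_text}\n"
        "Provide feedback appropriate to this design stage."
    )

prompt = build_llm_prompt("brainstorm", "Yes", 0.91,
                          "How can I improve daylighting?")
print(prompt)
```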
## How to Use
- Upload an architectural image using the image input box.
- Enter your text description or question in the text input box.
- Click the "Perform Classification & Generate Prompt" button.
- The application will display the Image Classification Results, Text Classification Results, and the Generated Prompt for LLM.
## Models Used
- Image Classification Model: a CNN model hosted on the Hugging Face Hub (`keerthikoganti/architecture-design-stages-compact-cnn`).
- Text Embedding Model: `sentence-transformers/all-MiniLM-L6-v2` from Hugging Face.
- Text Classification Model: an AutoGluon TabularPredictor model hosted on the Hugging Face Hub (`kaitongg/my-autogluon-model`), trained on text embeddings.
## Deployment
This application is designed to be deployed on Hugging Face Spaces.
- `app.py`: contains the complete Gradio application code, including model loading, function definitions, and the Gradio interface.
- `requirements.txt`: lists the Python packages to install in the Space environment.
- Models: the models are loaded directly from the Hugging Face Hub within `app.py`.
To deploy this application:
- Create a new Hugging Face Space.
- Choose the "Gradio" application template.
- Upload the generated `app.py` and `requirements.txt` files to your Space.
- Ensure any necessary secrets (such as `HF_TOKEN_WRITE`, if your text predictor repo is private) are added to your Space settings as environment variables.
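For reference, a plausible `requirements.txt` for this stack might look like the fragment below. This is an assumption based on the models listed above, not the Space's actual file; the exact packages and versions depend on `app.py`.

```text
gradio
huggingface_hub
sentence-transformers
autogluon.tabular
tensorflow
pillow
```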