# Mazingira 254 - Environmental Insight Analyzer Model Card

## Model Description
Mazingira 254 is a deep learning image classification model designed to analyze and classify images related to environmental psychology and sustainable design. The model is trained to identify key visual concepts that are important to urban planners, architects, environmental scientists, and students in the field.
It takes an image as input and predicts its relevance to one of the predefined environmental design categories, helping to automate the analysis of visual data for research, planning, and educational purposes.
## Model Details
- Model Type: Convolutional Neural Network (CNN) for Image Classification
- Base Architecture: Fine-tuned from a pre-trained model (e.g., EfficientNetV2B0 or MobileNetV2)
- Training Data: The model was trained on a custom dataset of images representing the classes listed below.
- Framework: TensorFlow / Keras
- Input: A (224, 224, 3) RGB color image
- Output: A probability distribution over the 9 classification categories.
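This input/output contract can be sanity-checked with plain NumPy, using a random image and a random probability vector as stand-ins for a real photo and the model's softmax output:

```python
import numpy as np

# Stand-in for a 224x224 RGB image (the real pipeline loads a JPEG/PNG).
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Models expect a leading batch dimension: (1, 224, 224, 3).
batch = img[np.newaxis].astype("float32")
print(batch.shape)  # (1, 224, 224, 3)

# The output is a probability distribution over the 9 categories;
# a Dirichlet draw stands in for model.predict() here.
probs = np.random.dirichlet(np.ones(9))
print(probs.shape)  # (9,)
```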
## Classification Categories
The model classifies images into one of the following 9 categories:
- Climate Resilient Housing
- Environmental Education
- Green Urban Planning
- Pollution Mitigation Measures
- Proxemics and Sustainable Behavior
- Restorative Environments
- Sustainable Architecture
- Territoriality and Adaptation
- Waste Management Systems
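The 9 labels above are in alphabetical order, which matches the default class-index ordering that `tf.keras.utils.image_dataset_from_directory` produces. Assuming `m_class_names.json` stores them as a plain JSON list (an assumption; the file format is not documented here), the file can be regenerated or verified like this:

```python
import json

CLASS_NAMES = [
    "Climate Resilient Housing",
    "Environmental Education",
    "Green Urban Planning",
    "Pollution Mitigation Measures",
    "Proxemics and Sustainable Behavior",
    "Restorative Environments",
    "Sustainable Architecture",
    "Territoriality and Adaptation",
    "Waste Management Systems",
]

# Index i of the model's output vector corresponds to CLASS_NAMES[i],
# so this on-disk order must never change after training.
assert CLASS_NAMES == sorted(CLASS_NAMES)

with open("m_class_names.json", "w") as f:
    json.dump(CLASS_NAMES, f, indent=2)
```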
## How to Use the Model
To use this model, you will need Python and TensorFlow installed.
```bash
pip install tensorflow numpy
```
Here is a Python code snippet demonstrating how to load the model and make a prediction.
```python
import tensorflow as tf
import numpy as np
import json
from tensorflow.keras.preprocessing import image

# --- 1. Configuration ---
MODEL_PATH = 'MAZINGIRA254.keras'
LABELS_PATH = 'm_class_names.json'
IMAGE_TO_PREDICT = 'path/to/your/image.jpg'  # <--- CHANGE THIS

# --- 2. Load Model and Class Names ---
try:
    print("Loading model and labels...")
    model = tf.keras.models.load_model(MODEL_PATH)
    with open(LABELS_PATH, 'r') as f:
        class_names = json.load(f)
    print("Resources loaded successfully.")
except Exception as e:
    print(f"Error loading resources: {e}")
    raise SystemExit(1)

# --- 3. Prediction Function ---
def classify_image(img_path, model, class_names):
    """Loads, preprocesses, and classifies a single image."""
    # Load and resize the image
    img = image.load_img(img_path, target_size=(224, 224))
    # Convert to array and add a batch dimension
    img_array = image.img_to_array(img)
    img_batch = np.expand_dims(img_array, axis=0)
    # Get predictions
    predictions = model.predict(img_batch)[0]
    # Create a dictionary of results
    confidences = {class_names[i]: float(predictions[i]) for i in range(len(class_names))}
    # Sort by confidence
    sorted_confidences = sorted(confidences.items(), key=lambda item: item[1], reverse=True)
    return sorted_confidences

# --- 4. Classify and Print Results ---
if __name__ == "__main__":
    print(f"\nClassifying image: {IMAGE_TO_PREDICT}")
    results = classify_image(IMAGE_TO_PREDICT, model, class_names)
    print("\n--- Top Predictions ---")
    for i in range(min(3, len(results))):
        class_name, confidence = results[i]
        print(f"{i+1}. {class_name}: {confidence:.2%}")
```
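The label-decoding and ranking step at the heart of `classify_image` can be exercised without the model file. Here is a minimal sketch using a synthetic prediction vector (three labels instead of nine, values chosen by hand):

```python
import numpy as np

class_names = [
    "Green Urban Planning",
    "Restorative Environments",
    "Waste Management Systems",
]
predictions = np.array([0.2, 0.7, 0.1])  # synthetic softmax output

# Same logic as classify_image: pair labels with scores, sort descending.
confidences = {class_names[i]: float(predictions[i]) for i in range(len(class_names))}
ranked = sorted(confidences.items(), key=lambda item: item[1], reverse=True)
print(ranked[0])  # ('Restorative Environments', 0.7)
```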
## Files in This Repository
- `MAZINGIRA254.keras`: The main trained model file in the modern Keras v3 format. It contains the model architecture, weights, and optimizer state.
- `m_class_names.json`: A JSON file containing the list of class labels in the correct order for interpreting the model's output.
- `Mazingira_254_Environmental_Psychology_training_history.png`: A plot showing the training and validation accuracy/loss curves over epochs.
- `README.md`: This file.
## Training & Performance
The model was trained using a two-phase transfer learning approach:
- Feature Extraction: The pre-trained base model's weights were frozen, and only the new classification head was trained for 10 epochs.
- Fine-Tuning: The top 40 layers of the base model were unfrozen, and the entire model was trained for an additional 20 epochs with a very low learning rate (1e-5) to fine-tune the weights on the specific task.
For a detailed view of performance metrics, please see the `Mazingira_254_Environmental_Psychology_training_history.png` file.
## Limitations & Bias
- Geographic & Cultural Bias: The model's performance is dependent on the diversity of its training data. It may perform better on images from geographic regions and cultures that were well-represented in the training set and less effectively on underrepresented ones.
- Scope Limitation: The model can only classify images into the 9 categories it was trained on. It cannot identify concepts outside of this scope.
- Subjectivity: Categories like "Restorative Environments" or "Sustainable Behavior" can be subjective. The model's understanding is based on the patterns and labels of its training data, which may not align with all human interpretations.
## Attribution
If you use this model in your work, please provide attribution as required by the CC BY 4.0 license. You can use the following format:
Mazingira 254 - Environmental Insight Analyzer by KABURA KURIA / ANON STUDIOS 254, licensed under CC BY 4.0.