Sentinel Satellite Image Classification Project

Project Overview

This project focuses on the development and deployment of a machine learning application for satellite image classification. The goal is to automate the classification of satellite images into predefined categories that represent different types of land cover.

Motivation

End Users

The end users of this project are environmental scientists and urban planners.

Goal of End Users

Their goal is to utilize automated tools to classify large volumes of satellite imagery quickly and accurately for environmental monitoring and urban planning purposes.

Obstacle to be Solved

The main obstacles include the high variability and similarity between different land cover types in satellite images and the volume of data that requires processing.
import tensorflow as tf
tf.__version__
'2.16.1'
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
Num GPUs Available: 1
Data Collection and Augmentation

Images Collected
The dataset used in this project is the EuroSAT collection, which consists of 30,988 satellite images derived from Sentinel satellites. These images are categorized into ten classes representing different types of land cover: AnnualCrop, Forest, HerbaceousVegetation, Highway, Industrial, Pasture, PermanentCrop, Residential, River, SeaLake.
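Since `image_dataset_from_directory` infers labels from folder names, a quick sanity check of the class balance can be done by counting files per class directory. A minimal sketch, assuming the EuroSAT images sit under `data/<ClassName>/` (the layout implied by the loading code in this notebook); `count_images_per_class` is a hypothetical helper, not part of the original project:

```python
from pathlib import Path

def count_images_per_class(root):
    """Count files in each immediate subdirectory of an EuroSAT-style layout."""
    return {d.name: sum(1 for p in d.iterdir() if p.is_file())
            for d in sorted(Path(root).iterdir()) if d.is_dir()}

# e.g. count_images_per_class('data') -> {'AnnualCrop': ..., 'Forest': ..., ...}
```

A roughly even distribution across the ten classes avoids the model over-predicting a dominant land-cover type.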
Description of Splitting Images into Classes/Labeling Images
The EuroSAT images come pre-labeled, which facilitates the classification task. The dataset was split into a training set comprising 80% of the images and a validation set comprising 20%, ensuring a comprehensive evaluation of the model across varied image data.
import numpy as np
import keras
from keras import layers
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator
def load_data():
    train_ds = tf.keras.utils.image_dataset_from_directory(
        'data',
        validation_split=0.2,
        subset='training',
        # (arguments below reconstructed as a sketch from the reported outputs:
        # 64x64 inputs, 32-image batches, one-hot labels; the original lines
        # were elided in the source, and the seed value is an assumption)
        seed=42,
        image_size=(64, 64),
        batch_size=32,
        label_mode='categorical')
    val_ds = tf.keras.utils.image_dataset_from_directory(
        'data',
        validation_split=0.2,
        subset='validation',
        seed=42,
        image_size=(64, 64),
        batch_size=32,
        label_mode='categorical')
    return train_ds, val_ds

train_ds, val_ds = load_data()
class_names = train_ds.class_names
print(class_names)
Found 30988 files belonging to 10 classes.
Using 24791 files for training.
Found 30988 files belonging to 10 classes.
Using 6197 files for validation.
['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial', 'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake']
import matplotlib.pyplot as plt
for images, labels in train_ds.take(1):
plt.figure(figsize=(6, 6))
    # (preview reconstructed as a sketch; the original lines were elided in the source)
    plt.imshow(images[0].numpy().astype("uint8"))
    plt.axis("off")
    print("Sample pixel values (0 to 1 range):", images[0].numpy().flatten()[0:5])
    print("Min and max pixel values:", images[0].numpy().min(), images[0].numpy().max())
Sample pixel values (0 to 1 range): [180. 183. 156. 177. 186.]
Min and max pixel values: 74.0 248.0
val_batches = tf.data.experimental.cardinality(val_ds)
test_ds = val_ds.take(val_batches // 5)
validation_ds = val_ds.skip(val_batches // 5)
# ... (code elided in the source) ...
print('Number of training batches:', tf.data.experimental.cardinality(train_ds).numpy())
print('Number of validation batches:', tf.data.experimental.cardinality(validation_ds).numpy())
print('Number of test batches:', tf.data.experimental.cardinality(test_ds).numpy())
Number of training batches: 775
Number of validation batches: 156
Number of test batches: 38
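These batch counts can be verified arithmetically from the file counts reported above, assuming the 32-image batch size used when loading the dataset:

```python
import math

total_files = 30988
val_files = int(0.2 * total_files)             # 6197 files for validation
train_files = total_files - val_files          # 24791 files for training
batch_size = 32                                # assumed batch size

train_batches = math.ceil(train_files / batch_size)   # 775
val_batches = math.ceil(val_files / batch_size)       # 194
test_batches = val_batches // 5                       # 38, via val_ds.take(val_batches // 5)
validation_batches = val_batches - test_batches       # 156, the remainder from val_ds.skip(...)

print(train_batches, validation_batches, test_batches)  # -> 775 156 38
```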
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(10, 10))
# (plotting loop reconstructed as a sketch; the original lines were elided in the source)
for images, labels in train_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        class_index = np.argmax(labels[i])
        plt.title(class_names[class_index])
        plt.axis("off")
number_of_classes = len(train_ds.class_names)
Data Augmentation Description
To enhance the robustness of the model against variations in real-world satellite images, several data augmentation techniques were applied. These included random flips (both horizontal and vertical), random rotations (up to 20 degrees), random zoom (up to 20%), and random contrast adjustments. These techniques help simulate different capture conditions and photographic variations, aiding the model in learning more generalized features.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
# (augmentation pipeline reconstructed as a sketch from the description above;
# the original lines were elided in the source)
data_augmentation = keras.Sequential([
    keras.layers.RandomFlip("horizontal_and_vertical"),
    keras.layers.RandomRotation(20 / 360),   # up to ~20 degrees
    keras.layers.RandomZoom(0.2),            # up to 20% zoom
    keras.layers.RandomContrast(0.2),
])
def augment_data(dataset):
return dataset.map(lambda x, y: (data_augmentation(x, training=True), y))
import numpy as np
import matplotlib.pyplot as plt
for images, labels in train_ds.take(1):
    # (augmentation preview reconstructed as a sketch; the original lines were elided)
    plt.figure(figsize=(10, 10))
    for i in range(9):
        augmented_image = data_augmentation(tf.expand_dims(images[i], 0), training=True)
        ax = plt.subplot(3, 3, i + 1)
        class_name = class_names[np.argmax(labels[i])]
        plt.imshow(augmented_image[0].numpy().astype("uint8"))
        plt.title(class_name)
        plt.axis("off")
dataset_length = tf.data.experimental.cardinality(train_ds).numpy()
print("Length of the TensorFlow dataset:", dataset_length)
Length of the TensorFlow dataset: 775
Model Training
Initial Training and Fine Tuning
The model's initial training utilized a pre-trained EfficientNetB0 architecture with the top layers tailored for our classification needs. The base model's layers were initially frozen. Fine-tuning was later applied by unfreezing all layers and continuing training, which refined the model's ability to classify complex images more accurately.
Comparison of Performance
Initially, the model achieved a validation accuracy of around 92%. After fine-tuning, validation accuracy improved to roughly 93%, and the held-out test set reached about 94% accuracy. This indicates the effectiveness of fine-tuning in enhancing the model's capability to distinguish subtle features in satellite images.
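The before/after comparison can be read straight off the Keras `History` objects returned by `model.fit`. A small sketch, using final-epoch validation accuracies from this notebook's training logs as stand-in values:

```python
# stand-in values taken from the training logs in this notebook
initial_val_acc = [0.9022, 0.9083, 0.9171]     # subset of history.history['val_accuracy']
fine_tuned_val_acc = [0.9219, 0.9249, 0.9327]  # subset of history_fine.history['val_accuracy']

improvement = max(fine_tuned_val_acc) - max(initial_val_acc)
print(f"fine-tuning gained {improvement:.4f} validation accuracy")
```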
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
import tensorflow as tf
# (model and fine-tuning helpers reconstructed as a sketch from the description
# above; the original lines were elided in the source, so head layers and
# hyperparameters here are assumptions)
def create_efficientnet_model(input_shape, num_classes):
    base_model = EfficientNetB0(include_top=False, weights='imagenet', input_shape=input_shape)
    base_model.trainable = False  # freeze the pre-trained base for the initial phase
    inputs = keras.Input(shape=input_shape)
    x = base_model(inputs, training=False)
    x = keras.layers.GlobalAveragePooling2D()(x)
    x = keras.layers.Dropout(0.2)(x)
    outputs = keras.layers.Dense(num_classes, activation='softmax')(x)
    return keras.Model(inputs, outputs)

def fine_tune_model(model, train_ds, validation_ds, epochs):
    # unfreeze the base model and continue training with a much lower learning rate
    model.trainable = True
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    history_fine = model.fit(train_ds, validation_data=validation_ds, epochs=epochs)
    return history_fine
train_ds = augment_data(train_ds)
model = create_efficientnet_model((64, 64, 3), len(class_names))
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# (compile arguments and the callback-list opener above/below reconstructed;
# the original lines were elided in the source)
callbacks = [
keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, min_lr=0.00001),
keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
]
initial_epochs = 10
history = model.fit(train_ds, validation_data=validation_ds, epochs=initial_epochs, callbacks=callbacks)
Epoch 1/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 56s 62ms/step - accuracy: 0.8013 - loss: 0.6025 - val_accuracy: 0.9022 - val_loss: 0.3121 - learning_rate: 0.0010
Epoch 2/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 38s 49ms/step - accuracy: 0.8930 - loss: 0.3150 - val_accuracy: 0.9083 - val_loss: 0.2748 - learning_rate: 0.0010
...
Epoch 10/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 39s 50ms/step - accuracy: 0.9485 - loss: 0.1468 - val_accuracy: 0.9171 - val_loss: 0.2794 - learning_rate: 0.0010
epochs = 10
history_fine = fine_tune_model(model, train_ds, validation_ds, epochs)
Epoch 1/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 55s 62ms/step - accuracy: 0.9292 - loss: 0.2041 - val_accuracy: 0.9219 - val_loss: 0.2278 - learning_rate: 1.0000e-05
Epoch 2/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 40s 51ms/step - accuracy: 0.9348 - loss: 0.1898 - val_accuracy: 0.9249 - val_loss: 0.2211 - learning_rate: 1.0000e-05
...
Epoch 10/10
775/775 ━━━━━━━━━━━━━━━━━━━━ 38s 49ms/step - accuracy: 0.9461 - loss: 0.1608 - val_accuracy: 0.9327 - val_loss: 0.2127 - learning_rate: 1.0000e-05
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
# ... (plotting code elided in the source) ...
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
# ... (plotting code elided in the source) ...
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
print("Test dataset evaluation")
model.evaluate(test_ds)
Test dataset evaluation
38/38 ━━━━━━━━━━━━━━━━━━━━ 1s 34ms/step - accuracy: 0.9317 - loss: 0.2277
[0.19601286947727203, 0.9358552694320679]
print(model.summary())
Model: "functional_15"
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓
┃ Layer (type)        ┃ Output Shape      ┃    Param # ┃ Connected to      ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━┩
│ input_layer_11      │ (None, 64, 64, 3) │          0 │ -                 │
│ ...                 │ ...               │        ... │ ...               │
│ dense_17 (Dense)    │ (None, 10)        │      2,570 │ dropout_4[0][0]   │
└─────────────────────┴───────────────────┴────────────┴───────────────────┘
Total params: 6,418,883 (24.49 MB)
Trainable params: 789,770 (3.01 MB)
Non-trainable params: 4,049,571 (15.45 MB)
Optimizer params: 1,579,542 (6.03 MB)
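The parameter breakdown above is internally consistent: trainable weights, frozen weights, and optimizer slot variables sum to the reported total. Checking the arithmetic:

```python
trainable = 789_770
non_trainable = 4_049_571
optimizer_slots = 1_579_542

total = trainable + non_trainable + optimizer_slots
print(total)  # -> 6418883, i.e. the 6,418,883 total reported by model.summary()
```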
None
print("Test dataset evaluation")
model.evaluate(test_ds)
Test dataset evaluation
38/38 ━━━━━━━━━━━━━━━━━━━━ 1s 33ms/step - accuracy: 0.9380 - loss: 0.2024
[0.20488472282886505, 0.9358552694320679]
import numpy as np
import tensorflow as tf
y_true = []
y_pred = []
# (per-batch prediction loop reconstructed as a sketch; the original lines
# were elided in the source)
for images, labels in test_ds:
    preds = model.predict(images)
    y_true.extend(np.argmax(labels.numpy(), axis=1))
    y_pred.extend(np.argmax(preds, axis=1))
1/1 ━━━━━━━━━━━━━━━━━━━━ 5s 5s/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 36ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
...
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
# (matrix computation and heatmap reconstructed as a sketch; the original
# lines were elided in the source)
cm = confusion_matrix(y_true, y_pred)
plt.figure(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt='d',
            xticklabels=class_names, yticklabels=class_names)
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.title('Confusion Matrix')
plt.show()
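Beyond the heatmap, the same confusion matrix yields per-class recall: the diagonal divided by each row sum. A sketch with a hypothetical 3-class matrix (not the notebook's actual counts):

```python
import numpy as np

# hypothetical counts: rows = true class, columns = predicted class
cm_demo = np.array([[50, 2, 3],
                    [4, 45, 1],
                    [2, 2, 51]])

per_class_recall = cm_demo.diagonal() / cm_demo.sum(axis=1)
print(per_class_recall.round(3))  # recall per class: 0.909, 0.9, 0.927
```

Per-class recall quickly exposes which land-cover types (e.g. visually similar crop classes) the model confuses most.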
model.save('sentinel_classificatiion_model_generated.keras')

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
# (prediction visualization reconstructed as a sketch; the original lines
# were elided in the source)
def plot_images(images, labels, predictions):
    plt.figure(figsize=(10, 10))
    for i in range(9):
        plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        true_class = class_names[np.argmax(labels[i])]
        pred_class = class_names[np.argmax(predictions[i])]
        plt.title(f"true: {true_class}\npred: {pred_class}")
        plt.axis("off")

images, labels = next(iter(test_ds))
predictions = model.predict(images)
plot_images(images, labels, predictions)
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 8s 8s/step
Model Application

Deployment
The model was deployed using a Gradio web interface, which provides a user-friendly GUI for uploading images and receiving instant classifications.
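A Gradio app of this kind needs a function that maps a model's class probabilities to the `{label: confidence}` dictionary a `gr.Label` component displays. A minimal sketch of that glue; `predictions_to_label_dict` is a hypothetical helper, not code from the deployed app, and the commented wiring assumes the standard `gr.Interface` API:

```python
import numpy as np

CLASS_NAMES = ['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial',
               'Pasture', 'PermanentCrop', 'Residential', 'River', 'SeaLake']

def predictions_to_label_dict(probs, top_k=3):
    """Turn a probability vector into the {label: confidence} mapping gr.Label expects."""
    order = np.argsort(probs)[::-1][:top_k]
    return {CLASS_NAMES[i]: float(probs[i]) for i in order}

# Wiring sketch (run where gradio and the trained model are available):
#   import gradio as gr
#   demo = gr.Interface(fn=lambda img: predictions_to_label_dict(model.predict(img[None])[0]),
#                       inputs=gr.Image(), outputs=gr.Label(num_top_classes=3))
#   demo.launch()

probs = np.zeros(10)
probs[1], probs[5], probs[8] = 0.7, 0.2, 0.1
print(predictions_to_label_dict(probs))  # -> {'Forest': 0.7, 'Pasture': 0.2, 'River': 0.1}
```

Returning the top-k dictionary rather than a single class is what gives users the confidence scores mentioned in the validation feedback below.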
Demo
A live demo of the application can be accessed at: https://huggingface.co/spaces/Lars2000/sentinel
Results of User Validation
User feedback highlighted the application's ease of use and accuracy. Positive points included quick response times and informative confidence scores for different classifications. Suggestions for improvement were focused on enhancing performance with low-contrast images and those affected by cloud cover.
Conclusion
The project successfully demonstrated the application of convolutional neural networks in classifying satellite imagery, utilizing both transfer learning and fine-tuning approaches to achieve high accuracy. Future improvements could address the challenges identified through user feedback, potentially involving the incorporation of additional data preprocessing steps or advanced neural network architectures.