Welcome (or welcome back!) to the AI for social good series! In this second part of the two-part series, we will look at how artificial intelligence (AI), coupled with the power of open-source tools and techniques like deep learning, can help us further the quest of finding extra-terrestrial intelligence!

In the first part of this two-part series, we formulated our key objective and motivation behind doing this project. Briefly, we were looking at different radio-telescope signals simulated from SETI (Search for Extra-terrestrial Intelligence) Institute data. We leveraged techniques to process, analyze and visualize radio signals as spectrograms, which are basically visual representations of the raw signal.
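As a quick refresher, a spectrogram slices a signal into short overlapping windows and shows how its frequency content evolves over time. Here is a minimal sketch using scipy (the synthetic tone-plus-noise signal and its parameters are purely illustrative, not the actual SETI data pipeline):

```python
import numpy as np
from scipy.signal import spectrogram

np.random.seed(42)

# illustrative synthetic signal: a 1 kHz tone buried in noise,
# sampled at 8 kHz for one second
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(fs)

# freqs: frequency bins (Hz), times: segment centers (s),
# Sxx: power for each (frequency, time) cell
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256)
print(Sxx.shape)  # one row per frequency bin, one column per time segment
```

Plotting `Sxx` (e.g. with `plt.pcolormesh`) gives exactly the kind of image we feed to our classifiers below: the 1 kHz tone shows up as a bright horizontal band.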

Signal observed at the Allen Telescope Array from the Cassini satellite while orbiting Saturn on September 3, 2014 (Source: https://medium.com/@gadamc/using-artificial-intelligence-to-search-for-extraterrestrial-intelligence-ec19169e01af)


The key focus for us in this article will be to try and build a robust radio signal classifier using deep learning for a total of seven different types of signals!

Loading SETI Signal Data

As we discussed in the previous article, the simulated SETI radio-signal dataset is available on Kaggle. Remember, the processed dataset is available in the primary_small folder. After unzipping its contents, this is what the directory structure looks like.


We have a total of 7 different signal classes to classify; each class has 800 samples for training and 100 each for validation and testing. The noise added to the simulated signal data, coupled with the small number of samples per class, makes this a tough problem to solve! Before we get started, let’s load the necessary dependencies we will be using for building models. We will leverage the tf.keras API from TensorFlow here.

import os
import math
import numpy as np
import json
from tensorflow import keras
from tensorflow.keras.metrics import categorical_accuracy
import tensorflow as tf
os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
BASE_DIR = './data'
TRAIN_DIR = os.path.join(BASE_DIR, 'train')
VAL_DIR = os.path.join(BASE_DIR, 'valid')
TEST_DIR = os.path.join(BASE_DIR, 'test')
print(TRAIN_DIR)
print(VAL_DIR)
print(TEST_DIR)


./data/train
./data/valid
./data/test

Visualize Sample SETI Signals

Just to recap from our last article and take a peek at the different types of signals we are dealing with, we can visualize their spectrograms using the following code.

from tensorflow.keras.preprocessing.image import img_to_array, load_img
import glob
import matplotlib.pyplot as plt
%matplotlib inline

data_signal_files = glob.glob(TRAIN_DIR+'/*/*.png')
sample_files = [data_signal_files[idx] for idx in range(0, 5600, 800+15)]

fig, ax = plt.subplots(2, 4, figsize=(14, 6))
for idx, img in enumerate(sample_files):
    id1 = 1 if idx > 3 else 0
    id2 = idx % 4
    img_arr = img_to_array(load_img(img))
    f = ax[id1, id2].imshow(img_arr / 255., aspect='auto')
    t = ax[id1, id2].set_title(img.split('/')[-1].split('.')[0].split('_')[-1], fontsize=10)
ax[1,3].set_axis_off()


Things look to be in order with regard to the different signal samples we are dealing with!

Data Generators and Image Augmentation

Since we have a low number of training samples per class, one strategy to get more data is to generate new data using image augmentation. The idea behind image augmentation is exactly as the name sounds. We load in existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, zooming, flipping and so on, to produce new, altered versions of existing images.

Due to these random transformations, we don’t get the same images each time. We need to be careful with augmentation operations though based on the problem we are solving so that we don’t end up distorting the source images too much.

We will be applying some basic transformations to all our training data but will keep our validation and test datasets as-is, only rescaling the pixel values. Let’s build our data generators now.

# instantiating the image data generator objects
train_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255.,
                                                             zoom_range=0.05,
                                                             rotation_range=180,
                                                             vertical_flip=True,
                                                             horizontal_flip=True,
                                                             fill_mode='reflect')
valid_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255.)
test_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255.)

# building the image data generators
TRAIN_BATCH_SIZE = 64
IMG_DIMS = (192, 192)
train_generator = train_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                    classes=['brightpixel', 'narrowband',
                                                             'narrowbanddrd', 'noise',
                                                             'squarepulsednarrowband', 'squiggle',
                                                             'squigglesquarepulsednarrowband'],
                                                    target_size=IMG_DIMS,
                                                    batch_size=TRAIN_BATCH_SIZE,
                                                    class_mode='categorical',
                                                    interpolation='bicubic',
                                                    shuffle=True, seed=42)
# Output
# Found 5600 images belonging to 7 classes.

VAL_BATCH_SIZE = 64
val_generator = valid_datagen.flow_from_directory(directory=VAL_DIR,
                                                  classes=['brightpixel', 'narrowband',
                                                           'narrowbanddrd', 'noise',
                                                           'squarepulsednarrowband', 'squiggle',
                                                           'squigglesquarepulsednarrowband'],
                                                  target_size=IMG_DIMS,
                                                  batch_size=VAL_BATCH_SIZE,
                                                  class_mode='categorical',
                                                  interpolation='bicubic',
                                                  shuffle=False, seed=42)
# Output
# Found 700 images belonging to 7 classes.

We can now build a sample data generator just to get an idea of how the data generator coupled with image augmentation works.

# building sample data generator
sample_generator = train_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                     classes=['brightpixel', 'narrowband',
                                                              'narrowbanddrd', 'noise',
                                                              'squarepulsednarrowband', 'squiggle',
                                                              'squigglesquarepulsednarrowband'],
                                                     target_size=IMG_DIMS,
                                                     batch_size=1,
                                                     class_mode='categorical',
                                                     interpolation='bicubic',
                                                     shuffle=True, seed=42)
sample = [next(sample_generator) for i in range(0, 5)]

# viewing sample signal data
fig, ax = plt.subplots(1, 5, figsize=(16, 6))
print('Labels:', [list(item[1][0]) for item in sample])
l = [ax[i].imshow(sample[i][0][0]) for i in range(0, 5)]


The labels are one-hot encoded; since we have a total of 7 classes, each label is a one-hot encoded vector of size 7.
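To make the encoding concrete, here is what the generator’s labels look like, sketched with plain NumPy using the same class order we pass to flow_from_directory (the helper function itself is just for illustration):

```python
import numpy as np

# class order as passed to flow_from_directory
classes = ['brightpixel', 'narrowband', 'narrowbanddrd', 'noise',
           'squarepulsednarrowband', 'squiggle',
           'squigglesquarepulsednarrowband']

def one_hot(class_name):
    """Illustrative helper: the one-hot vector the generator emits for a class."""
    vec = np.zeros(len(classes), dtype=np.float32)
    vec[classes.index(class_name)] = 1.0
    return vec

print(one_hot('noise'))  # -> [0. 0. 0. 1. 0. 0. 0.]
```

The softmax output layer of our models will produce a probability vector of the same shape, and argmax over it recovers the predicted class index.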

Deep Transfer Learning with CNNs

The idea of transfer learning is not a new concept, and it is very useful when we are working with limited data. Given a pre-trained model that was previously trained on a large dataset, we can apply it to a new problem with less data and should ideally get a model that performs better and converges faster.


There is a wide variety of pre-trained CNN models that have been trained on the ImageNet dataset, which contains over a million images belonging to a total of 1,000 classes. The idea is that these models act as effective feature extractors for images and can also be fine-tuned for the specific task at hand. A pre-trained model can be frozen completely, where we don’t change any layer weights when training on the new dataset, or we can fine-tune its weights (partially or completely) as we train on the new dataset.


In our scenario, we will try out partial and complete fine-tuning of our pre-trained models.

Pre-trained CNN Models

One of the fundamental requirements for transfer learning is the presence of models that perform well on source tasks. Luckily, the deep learning world believes in sharing. Many state-of-the-art deep learning architectures have been openly shared by their respective teams. Pre-trained models are usually shared in the form of the millions of parameters/weights the model learned while being trained to a stable state, and they are available for everyone to use through different means.

Pre-trained models are available in TensorFlow which you can access easily using its API. We will be showing how to do that in this article. You can also access pre-trained models from the web since most of them have been open-sourced.

For computer vision, you can leverage several popular pre-trained models. The ones we will be using in this article are VGG-19 and ResNet-50.

VGG-19 model

The VGG-19 model is a 19-layer (convolution and fully connected) deep learning network trained on the ImageNet database for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is described in their paper titled ‘Very Deep Convolutional Networks for Large-Scale Image Recognition’. I recommend interested readers to read this excellent paper. The architecture of the VGG-19 model is depicted in the following figure.

ResNet-50 model

The ResNet-50 model is a 50-layer deep learning network, built from residual convolutional blocks (several layers in each block) and trained on the ImageNet database. Counting all sub-layers, the model has more than 175 layers in total, making it a very deep network. ResNet stands for Residual Networks. The following figure shows the typical architecture of ResNet-34.

In general, deep convolutional neural networks have led to major breakthroughs in image classification accuracy. However, as we go deeper, training the neural network becomes difficult. The reason is often the vanishing gradient problem: as the gradient is back-propagated to the shallower layers (closer to the input), repeated tensor operations make the gradient really small, so the accuracy starts saturating and then degrades. Residual learning tries to solve these problems with residual blocks.


Leveraging skip connections, we allow the network to learn the identity function (depicted in the above figure), which lets the input pass through the residual block without going through the other weight layers. This helps tackle the vanishing gradient problem and also keeps a focus on the high-level features that can otherwise get lost across multiple levels of max-pooling.
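The skip connection itself is simple to express in code. Here is a minimal NumPy sketch of a residual block (fully connected rather than convolutional, purely for illustration): the block computes F(x) + x, so if the weight layers learn to output zeros, the block reduces to the identity function.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x), where F(x) is two weight layers with a ReLU between."""
    fx = relu(x @ w1) @ w2   # the 'weight layer' path F(x)
    return relu(fx + x)      # skip connection adds the input back

x = np.array([1.0, 2.0, 3.0])
zero_w = np.zeros((3, 3))
# with zero weights, F(x) == 0, so the block is the identity (for non-negative x)
print(residual_block(x, zero_w, zero_w))  # -> [1. 2. 3.]
```

Because the gradient also flows through the additive skip path unchanged, earlier layers still receive a useful gradient signal even in very deep stacks.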

Image via towardsdatascience.com


The ResNet-50 model we will be using consists of 5 stages, each with convolution and identity blocks. Each convolution block has 3 convolution layers, and each identity block also has 3 convolution layers.

Deep Transfer Learning with VGG-19

The focus here will be to take the pre-trained VGG-19 model and then perform both partial and complete fine-tuning of all the layers in the network. We will add the regular dense and output layers in the model for our downstream classification task.

Partial Fine-tuning

We will start our model training by taking the VGG-19 model and fine-tuning the last two blocks of the model. The first task here is to build the model architecture and specify which blocks/layers we want to fine-tune.

INPUT_SHAPE = (192, 192, 3)

# load the pre-trained model
vgg = keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                     input_shape=INPUT_SHAPE)
vgg.trainable = True
set_trainable = False
for layer in vgg.layers:
    if layer.name in ['block5_conv1', 'block4_conv1']:
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

# add custom dense and output layers
base_vgg = vgg
base_out = base_vgg.output
pool_out = keras.layers.Flatten()(base_out)
hidden1 = keras.layers.Dense(1024, activation='relu')(pool_out)
drop1 = keras.layers.Dropout(rate=0.2)(hidden1)
hidden2 = keras.layers.Dense(512, activation='relu')(drop1)
drop2 = keras.layers.Dropout(rate=0.2)(hidden2)
out = keras.layers.Dense(7, activation='softmax')(drop2)

model = keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=keras.optimizers.RMSprop(lr=1e-5),
              loss='categorical_crossentropy',
              metrics=[categorical_accuracy])
model.summary()
# Output
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 192, 192, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 192, 192, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 192, 192, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 96, 96, 64) 0
_________________________________________________________________
...
...
_________________________________________________________________
dense_5 (Dense) (None, 512) 524800
_________________________________________________________________
dropout_4 (Dropout) (None, 512) 0
_________________________________________________________________
dense_6 (Dense) (None, 7) 3591
=================================================================
Total params: 39,428,167
Trainable params: 37,102,599
Non-trainable params: 2,325,568
_________________________________________________________________


Now we will train the model for 100 epochs. I save the model after each epoch because I have a lot of space; I wouldn’t recommend doing this in practice unless you have plenty of storage. You can always leverage a callback like ModelCheckpoint to store only the best model.
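For reference, a ModelCheckpoint-based alternative would look something like this (the filepath and monitored metric here are assumptions matching the setup above):

```python
from tensorflow import keras

# keep only the single best set of weights, as measured on validation accuracy,
# instead of saving a checkpoint after every epoch
checkpoint = keras.callbacks.ModelCheckpoint(
    filepath='vgg19_finetune_partial_seti_best.h5',
    monitor='val_categorical_accuracy',
    mode='max',            # higher accuracy is better
    save_best_only=True,
    verbose=1)
# then pass callbacks=[checkpoint, csv_logger] when fitting the model
```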

class EpochModelSaver(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        self.model.save('vgg19_finetune_partial_seti_epoch_{}.h5'.format(epoch+1))

ms_epoch = EpochModelSaver()
csv_logger = keras.callbacks.CSVLogger('vgg19_finetune_partial_seti_log.csv', append=True, separator=',')

history1 = model.fit_generator(train_generator,
                               steps_per_epoch=math.ceil(5600 / TRAIN_BATCH_SIZE),
                               epochs=100,
                               validation_data=val_generator,
                               validation_steps=math.ceil(700 / VAL_BATCH_SIZE),
                               callbacks=[ms_epoch, csv_logger], verbose=1)
model.save('vgg19_finetune_partial_seti.h5')
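The steps_per_epoch value can be sanity-checked against the training logs: 5600 training images at a batch size of 64 give 88 steps per epoch (matching the 88/88 progress bars), and 700 validation images give 11 validation steps.

```python
import math

TRAIN_BATCH_SIZE = 64
steps_per_epoch = math.ceil(5600 / TRAIN_BATCH_SIZE)
validation_steps = math.ceil(700 / TRAIN_BATCH_SIZE)
print(steps_per_epoch, validation_steps)  # -> 88 11
```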

Epoch 1/100
88/88 [=======] - 109s 1s/step - loss: 1.6604 - categorical_accuracy: 0.3503 - val_loss: 1.1411 - val_categorical_accuracy: 0.5714
Epoch 2/100
88/88 [=======] - 89s 1s/step - loss: 1.1973 - categorical_accuracy: 0.5357 - val_loss: 1.0442 - val_categorical_accuracy: 0.5857
Epoch 3/100
88/88 [=======] - 94s 1s/step - loss: 0.9871 - categorical_accuracy: 0.6165 - val_loss: 0.8190 - val_categorical_accuracy: 0.6657
...
...
Epoch 98/100
88/88 [=======] - 93s 1s/step - loss: 0.4372 - categorical_accuracy: 0.8487 - val_loss: 0.4311 - val_categorical_accuracy: 0.8543
Epoch 99/100
88/88 [=======] - 94s 1s/step - loss: 0.4235 - categorical_accuracy: 0.8546 - val_loss: 0.3948 - val_categorical_accuracy: 0.8629
Epoch 100/100
88/88 [=======] - 95s 1s/step - loss: 0.4346 - categorical_accuracy: 0.8530 - val_loss: 0.4583 - val_categorical_accuracy: 0.8514


We can view the overall model learning curves using the following code snippet.

history = history1
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Model Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
max_epoch = len(history.history['categorical_accuracy'])+1
epoch_list = list(range(1,max_epoch))
ax1.plot(epoch_list, history.history['categorical_accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_categorical_accuracy'], label='Validation Accuracy')
tick_range = np.arange(0, max_epoch+1, 20)
tick_range[0] = 1
ax1.set_xticks(tick_range)
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
tick_range = np.arange(0, max_epoch+1, 20)
tick_range[0] = 1
ax2.set_xticks(tick_range)
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")


Looks decent, but there are definitely a lot of fluctuations in the validation loss and accuracy over time.

Complete Fine-tuning

For our next training process, we will take the VGG-19 model and fine-tune all the blocks and add in our own dense and output layers.

INPUT_SHAPE = (192, 192, 3)

# load pre-trained model
vgg = keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                     input_shape=INPUT_SHAPE)
# fine tune all layers
vgg.trainable = True
for layer in vgg.layers:
    layer.trainable = True

# add custom dense layers and output layer
base_vgg = vgg
base_out = base_vgg.output
pool_out = keras.layers.Flatten()(base_out)
hidden1 = keras.layers.Dense(1024, activation='relu')(pool_out)
drop1 = keras.layers.Dropout(rate=0.2)(hidden1)
hidden2 = keras.layers.Dense(512, activation='relu')(drop1)
drop2 = keras.layers.Dropout(rate=0.2)(hidden2)
out = keras.layers.Dense(7, activation='softmax')(drop2)

model = keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=keras.optimizers.RMSprop(lr=1e-6),
              loss='categorical_crossentropy',
              metrics=[categorical_accuracy])

# train model
class EpochModelSaver(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        self.model.save('vgg19_finetune_full_seti_epoch_{}.h5'.format(epoch+1))

ms_epoch = EpochModelSaver()
csv_logger = keras.callbacks.CSVLogger('vgg19_finetune_full_seti_log.csv', append=True, separator=',')

history2 = model.fit_generator(train_generator,
                               steps_per_epoch=math.ceil(5600 / TRAIN_BATCH_SIZE),
                               epochs=100,
                               validation_data=val_generator,
                               validation_steps=math.ceil(700 / VAL_BATCH_SIZE),
                               callbacks=[ms_epoch, csv_logger], verbose=1)
model.save('vgg19_finetune_full_seti.h5')

Epoch 1/100
88/88 [=======] - 147s 2s/step - loss: 2.0585 - categorical_accuracy: 0.1617 - val_loss: 1.8726 - val_categorical_accuracy: 0.3229
Epoch 2/100
88/88 [=======] - 137s 2s/step - loss: 1.8822 - categorical_accuracy: 0.2427 - val_loss: 1.7319 - val_categorical_accuracy: 0.4186
Epoch 3/100
88/88 [=======] - 137s 2s/step - loss: 1.7656 - categorical_accuracy: 0.3272 - val_loss: 1.5815 - val_categorical_accuracy: 0.4771
...
...
Epoch 98/100
88/88 [=======] - 137s 2s/step - loss: 0.4483 - categorical_accuracy: 0.8461 - val_loss: 0.4451 - val_categorical_accuracy: 0.8443
Epoch 99/100
88/88 [=======] - 137s 2s/step - loss: 0.4414 - categorical_accuracy: 0.8496 - val_loss: 0.4357 - val_categorical_accuracy: 0.8429
Epoch 100/100
88/88 [=======] - 137s 2s/step - loss: 0.4513 - categorical_accuracy: 0.8438 - val_loss: 0.4228 - val_categorical_accuracy: 0.8514


The learning curves for the training process are depicted in the following figure.


The validation accuracy and loss look more stable as the epochs increase.

Deep Transfer Learning with ResNet-50

The focus in this section will be to take the pre-trained ResNet-50 model and then perform complete fine-tuning of all the layers in the network. We will add the regular dense and output layers as usual.

Complete Fine-tuning

For our training process, we will load the pre-trained ResNet-50 model and fine-tune the entire network for 500 epochs. Let’s start by building the model architecture.

INPUT_SHAPE = (192, 192, 3)

# load pre-trained resnet model
resnet = keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet',
                                              input_shape=INPUT_SHAPE)
# set all layers to be trainable
resnet.trainable = True
for layer in resnet.layers:
    layer.trainable = True

# add dense and output layers
base_resnet = resnet
base_out = base_resnet.output
pool_out = keras.layers.Flatten()(base_out)
hidden1 = keras.layers.Dense(1024, activation='relu')(pool_out)
drop1 = keras.layers.Dropout(rate=0.2)(hidden1)
hidden2 = keras.layers.Dense(512, activation='relu')(drop1)
drop2 = keras.layers.Dropout(rate=0.2)(hidden2)
out = keras.layers.Dense(7, activation='softmax')(drop2)

model = keras.Model(inputs=base_resnet.input, outputs=out)
model.compile(optimizer=keras.optimizers.RMSprop(lr=1e-6),
              loss='categorical_crossentropy',
              metrics=[categorical_accuracy])
model.summary()
# Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) (None, 192, 192, 3) 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 198, 198, 3) 0 input_3[0][0]
__________________________________________________________________________________________________
conv1 (Conv2D) (None, 96, 96, 64) 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 96, 96, 64) 256 conv1[0][0]
__________________________________________________________________________________________________
...
...
__________________________________________________________________________________________________
dense_8 (Dense) (None, 512) 524800 dropout_5[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 512) 0 dense_8[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 7) 3591 dropout_6[0][0]
==================================================================================================
Total params: 99,614,599
Trainable params: 99,561,479
Non-trainable params: 53,120
__________________________________________________________________________________________________


Let’s now train the model for a total of 500 epochs.

class EpochModelSaver(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        self.model.save('resnet50_finetune_full_seti_epoch_{}.h5'.format(epoch+1))

ms_epoch = EpochModelSaver()
csv_logger = keras.callbacks.CSVLogger('resnet50_finetune_full_seti_log.csv', append=True, separator=',')

history3 = model.fit_generator(train_generator,
                               steps_per_epoch=math.ceil(5600 / TRAIN_BATCH_SIZE),
                               epochs=500,
                               validation_data=val_generator,
                               validation_steps=math.ceil(700 / VAL_BATCH_SIZE),
                               callbacks=[ms_epoch, csv_logger], verbose=1)

Epoch 1/500
88/88 [=======] - 139s 2s/step - loss: 2.3190 - categorical_accuracy: 0.2356 - val_loss: 1.9669 - val_categorical_accuracy: 0.2614
Epoch 2/500
88/88 [=======] - 118s 1s/step - loss: 1.9840 - categorical_accuracy: 0.3255 - val_loss: 1.7577 - val_categorical_accuracy: 0.3500
Epoch 3/500
88/88 [=======] - 118s 1s/step - loss: 1.7720 - categorical_accuracy: 0.3823 - val_loss: 1.6397 - val_categorical_accuracy: 0.3886
...
...
Epoch 498/500
88/88 [=======] - 118s 1s/step - loss: 0.3285 - categorical_accuracy: 0.8928 - val_loss: 0.3917 - val_categorical_accuracy: 0.8729
Epoch 499/500
88/88 [=======] - 118s 1s/step - loss: 0.3478 - categorical_accuracy: 0.8862 - val_loss: 0.3973 - val_categorical_accuracy: 0.8757
Epoch 500/500
88/88 [=======] - 118s 1s/step - loss: 0.3380 - categorical_accuracy: 0.8896 - val_loss: 0.3954 - val_categorical_accuracy: 0.8729

The learning curves for our trained model can be observed in the following figure.

Evaluating Model Performance on Test Data

It is now time to put our trained models to the test. We will do so by making predictions on the test dataset and evaluating model performance based on relevant classification metrics for our multi-class classification problem.

Load Test Dataset

We start by loading our test dataset and labels, leveraging the data generators we built previously.

TEST_BATCH_SIZE = 1
IMG_DIMS = (192, 192)
test_generator = test_datagen.flow_from_directory(directory=TEST_DIR,
                                                  classes=['brightpixel', 'narrowband',
                                                           'narrowbanddrd', 'noise',
                                                           'squarepulsednarrowband', 'squiggle',
                                                           'squigglesquarepulsednarrowband'],
                                                  target_size=IMG_DIMS,
                                                  batch_size=TEST_BATCH_SIZE,
                                                  class_mode='categorical',
                                                  interpolation='bicubic',
                                                  shuffle=False, seed=42)
class_label_mapping = {v: k for k, v in test_generator.class_indices.items()}

test_data = [next(test_generator) for i in range(700)]
test_data_X = [data[0] for data in test_data]
# each batch has shape (1, 192, 192, 3), so squeeze out the batch axis
test_data_X = np.squeeze(np.array(test_data_X), axis=1)
test_data_y = np.array([fname.split('/')[0] for fname in test_generator.filenames])
class_labels = list(set(test_data_y))
test_data_X.shape, test_data_y.shape


Found 700 images belonging to 7 classes.

((700, 192, 192, 3), (700,))

Build Model Performance Evaluation Function

We will now build a basic classification model performance evaluation function, which we will use to test the performance of each of our three models.

import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix

def evaluate_model_results(model, test_data, test_labels,
                           class_label_mapping, class_labels):
    predictions = model.predict(test_data, verbose=1)
    prediction_labels = [class_label_mapping[idx] for idx in predictions.argmax(axis=1)]
    print(classification_report(y_true=test_labels, y_pred=prediction_labels))
    return pd.DataFrame(confusion_matrix(y_true=test_labels, y_pred=prediction_labels,
                                         labels=class_labels),
                        index=class_labels, columns=class_labels)


We are now ready to test our models’ performance on the test dataset.

Model 1 — Partial fine-tuned VGG-19

Here we evaluate the performance of our partially fine-tuned VGG-19 model on the test dataset.

vgg19_partial_ft_model3 = keras.models.load_model('vgg19_finetune_partial_models/vgg19_finetune_partial_seti_epoch_99.h5')
evaluate_model_results(vgg19_partial_ft_model3, test_data_X, test_data_y,
                       class_label_mapping, class_labels)


An overall accuracy/F1-score of 86% on the test dataset, which is pretty decent!

Model 2 — Complete fine-tuned VGG-19

Here we evaluate the performance of our completely fine-tuned VGG-19 model on the test dataset.

vgg19_full_ft_model1 = keras.models.load_model('vgg19_finetune_full_models/vgg19_finetune_full_seti.h5')
evaluate_model_results(vgg19_full_ft_model1, test_data_X, test_data_y,
                       class_label_mapping, class_labels)


An overall accuracy/F1-score of 85% on the test dataset, which is slightly lower than the partially fine-tuned model.

Model 3 — Complete fine-tuned ResNet-50

Here we evaluate the performance of our completely fine-tuned ResNet-50 model on the test dataset.

with tf.device('/cpu:0'):
    resnet_ft_model4 = keras.models.load_model('resnet_finetune_full_models/resnet50_finetune_full_seti_epoch_497.h5')
evaluate_model_results(resnet_ft_model4, test_data_X, test_data_y,
                       class_label_mapping, class_labels)

We get an overall accuracy/F1-score of 88%, which is definitely the best model performance yet on the test dataset! It looks like the ResNet-50 model performed the best, given that we trained it for 500 epochs.

Best Model Predictions on Sample Test Data

We can now use our best model to make predictions on sample radio-signal spectrograms.

fig, ax = plt.subplots(2, 4, figsize=(12, 6))
for idx, img_idx in enumerate([15, 123, 230, 340, 450, 560, 670]):
    id1 = 1 if idx > 3 else 0
    id2 = idx % 4
    predicted_label = class_label_mapping[
        np.argmax(resnet_ft_model4.predict(np.array([test_data_X[img_idx]])), axis=1)[0]
    ]
    f = ax[id1, id2].imshow(test_data_X[img_idx], aspect='auto')
    t = ax[id1, id2].set_title('Actual: {}\nPredicted: {}'.format(test_data_y[img_idx],
                                                                  predicted_label),
                               fontsize=10)
ax[1, 3].set_axis_off()
fig.tight_layout()

Conclusion

This brings us to the end of our two-part series on leveraging deep learning to further the search for extra-terrestrial intelligence. You saw how we converted radio-signal data into spectrograms and then leveraged the power of transfer learning to build classifiers that performed pretty well even with a really low number of training samples per class.