Accurate Detection Methods for Aerial Image Synthesis Using Expert Visual Perception

GAN Synthesis of Aerial Imagery

Image generation techniques such as generative adversarial networks (GANs) have become sufficiently sophisticated to cause growing concerns around the authenticity of image data. While much attention has been given to the implications of this technology when applied to faces, as seen in debates surrounding technologies such as “deepfakes”, far less focus has been placed on the security issues that may arise when these models are applied to other forms of image data, such as aerial imagery.

Aerial image data is used extensively for tasks such as remote sensing and mapping, and is relied upon across multiple sectors, from security and intelligence to economic assessment of regions and disaster warning systems. The assumption that aerial image data is authentic, combined with the heavy reliance on open-source imagery, leaves the earth imagery domain vulnerable to malicious uses of state-of-the-art image synthesis algorithms. This creates a need for a greater focus on developing novel methods to detect and counter the use of such models before they can be deployed for harmful and destructive purposes.

Detecting GAN images using expert human knowledge

When it comes to detecting GAN-generated images, there appears to be a difference between human and AI methods. State-of-the-art models such as NVIDIA's StyleGAN2 can easily produce photorealistic samples that fool humans yet are readily identified by simple convolutional neural network (CNN) image classifiers. In other scenarios the opposite is true: generated samples manage to fool CNN-based methods but remain visually distinguishable from real samples, so humans experienced with handling similar data achieve better accuracy in detection tasks.
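As an illustration of what a "simple CNN image classifier" can mean in this context, the sketch below shows a minimal PyTorch binary classifier trained on folders of real and GAN-generated aerial patches. This is not the code used in either study; the dataset path, folder layout, and hyperparameters are placeholder assumptions for the example.

```python
# Minimal sketch of a real-vs-GAN binary CNN classifier (illustrative only).
# Assumes images are arranged for torchvision's ImageFolder loader, e.g.
# aerial_patches/real/... and aerial_patches/fake/...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SimpleCNN(nn.Module):
    """Small convolutional classifier: class 0 = real, class 1 = GAN-generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 2)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def train(data_dir="aerial_patches", epochs=5):
    # Resize all patches to a fixed size and convert to tensors.
    tfm = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    loader = DataLoader(datasets.ImageFolder(data_dir, transform=tfm),
                        batch_size=32, shuffle=True)
    model = SimpleCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```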

To explore the uses of expert knowledge in GAN image detection, we have created two separate online studies that anyone can take part in. The first study runs on pavlovia.org and looks to establish the relationship between experience and classification accuracy in a paired GAN image detection task. To take part in this study (15-20 mins) please follow the link below:

https://pavlovia.org/run/Matty0512/realfakev2/html/

The second study aims to further explore how experience influences detection performance by examining which GAN artefacts in synthesised aerial images betray them as synthetic. This study is hosted on Zooniverse.org and can be joined after completing the initial survey via the link below:

https://formfaca.de/sm/_kBsk76eo

For any additional information or queries, please feel free to contact me,
Matthew Yates
