How to better guard against malicious deception of artificial intelligence

(05-05-2022) Utku Özbulak's PhD examined how we can better protect artificial intelligence against malicious deception, more specifically in the context of images, such as biomedical and radar images.

Automation, the process by which the amount of human labor required to produce and deliver goods is minimized, has become an important part of our society since the Industrial Revolution. Automated systems allow us to produce and deliver goods quickly and cheaply. As a result, prosperity has greatly improved over the last century.

In recent years, we have taken this a step further with the use of artificially intelligent systems (AI systems, or neural networks). Systems that integrate AI make it possible to find accurate solutions to complex problems.

"Although neural networks, by using large amounts of data, make it possible to tackle very complex challenges, it was recently discovered that they have a serious security flaw, namely their vulnerability to adversarial examples," Utku says.

In this context, the term adversarial examples refers to input created with malicious intent, with the goal of misleading automated decision-making systems.

"Adversarial examples are widely recognized as one of the major security shortcomings of neural networks. Indeed, it is challenging to distinguish true input from these adversarial examples, especially in the context of images (e.g., biomedical images and radar images)," continued Utku.

"In my PhD, I studied the phenomenon of adversarial examples in more detail in the context of deep neural networks, with particular attention to the following topics: (1) properties of adversarial examples, (2) hostile attacks on models for segmenting biomedical images, (3) hostile attacks on models for recognizing human activities in radar images, and (4) defenses against adversarial examples," Utku explains.

"More specifically, my research has led to the development of two new hostile attacks on machine learning models, as well as a new defense, and where these developments allow for a deeper understanding of the properties of adversarial examples."

"Thanks to my research, we are one step closer to understanding adversarial examples and fixing this security flaw that threatens artificial intelligence systems" Utku concludes.


-

PhD Title: Prevalence of Adversarial Examples in Neural Networks: Attacks, Defenses, and Opportunities

-

Contact: Utku Özbulak, Wesley De Neve, Arnout Van Messem

Utku Özbulak (Izmir, Turkey, 1991) studied at Yasar University, Turkey, from 2009 to 2014, obtaining a Bachelor's Degree in Computer Science. Immediately after graduating, he took up the role of Business Intelligence Consultant in Istanbul, working for globally recognized companies such as The Coca-Cola Company, Turkish Airlines, and Bosch-Siemens Hausgeraete. After two years of industry experience, he decided to study for a Master's Degree in Data Science at the University of Southampton, UK, graduating with distinction in 2017. From September 2017 onwards, he continued his academic career at Ghent University through the pursuit of a PhD Degree in Computer Science Engineering. In this respect, he performed doctoral research at the newly opened global campus of Ghent University in South Korea (at the Center for Biosystems and Biotech Data Science) and at the home campus of Ghent University in Belgium (at IDLab). His main research focus was on achieving a better understanding of adversarial examples: carefully crafted data points that force machine learning models to make mistakes at test time. As a doctoral researcher, he contributed to two published A1 journal publications, both as first author, and to five published conference papers, four of which as first author.

-

Editor: Jeroen Ongenae - Final editing: Ilse Vercruysse - Illustrator: Roger Van Hecke