Investigating the threat of adversarial machine learning attacks on AI-powered security information and event management systems
| dc.contributor.author | Kikandi Safari Isaac | |
| dc.date.accessioned | 2026-01-26T07:25:58Z | |
| dc.date.available | 2026-01-26T07:25:58Z | |
| dc.date.issued | 2025-05 | |
| dc.description | Full text thesis | |
| dc.description.abstract | In the evolving landscape of cybersecurity, Security Information and Event Management (SIEM) systems have emerged as the nerve centers of modern defense, correlating logs, detecting anomalies, and delivering alerts through AI-powered intelligence. However, as organizations rely more on these machine learning components, a new threat arises: adversarial machine learning (AML). Rather than crashing systems or flooding networks, these attacks manipulate data just enough to fool the AI into silence or confusion, a subtlety that makes them both dangerous and difficult to detect. This research set out to simulate such a scenario by deploying two neural network architectures, a Feedforward Neural Network (FFNN) and a Convolutional Neural Network (CNN), as detection engines in a SIEM setting. Using a cleaned and balanced sample of the CICIDS2017 dataset, the models were trained and then tested against two evasion attacks: the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). These attacks, selected for their computational efficiency and widespread use in adversarial research, were applied under white-box conditions using the Adversarial Robustness Toolbox (an illustrative sketch of this attack setup follows the record below). The results told a striking story: the FGSM attack reduced the FFNN's detection rate from 95% to just 4.6%, while the PGD attack caused a substantial decline in the CNN's recall, dropping it to 25%. Although the CNN fared better under FGSM, it responded by flagging benign traffic as malicious, raising the risk of alert fatigue. Metrics such as precision, recall, and attack success rates clearly illustrated how easily an AI model's performance could be compromised with subtle adversarial effort. These findings confirm that, without proper defenses, AI-driven SIEMs are critically exposed, and the study recommends embedding robust countermeasures such as adversarial training, anomaly detection, and model hardening to safeguard these systems. As cyber threats grow smarter, so too must our defenses, not only in strength but in foresight. | |
| dc.description.sponsorship | Adventist University of Africa | |
| dc.identifier.uri | https://irepository.aua.ac.ke/handle/123456789/906 | |
| dc.language.iso | en | |
| dc.publisher | Adventist University of Africa | |
| dc.subject | Adversarial machine learning | |
| dc.subject | Cybersecurity threats | |
| dc.subject | Security information and event management (SIEM) | |
| dc.subject | Artificial intelligence in security | |
| dc.subject | Machine learning attacks and defenses | |
| dc.title | Investigating the threat of adversarial machine learning attacks on AI-powered security information and event management systems | |
| dc.type | Thesis |
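
For readers who want to see what the white-box evasion setup described in the abstract looks like in practice, below is a minimal, hypothetical sketch using the Adversarial Robustness Toolbox (ART) named there. The thesis code itself is not reproduced in this record, so everything specific in the sketch is an assumption: the PyTorch stand-in for the FFNN, the 78-feature input width, the epsilon values, and the random placeholder data standing in for the preprocessed CICIDS2017 sample.

```python
# Illustrative sketch only; not the thesis code.
# Assumes: pip install adversarial-robustness-toolbox torch
# Assumes features have been cleaned, balanced, and min-max scaled to [0, 1].
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent

N_FEATURES = 78  # assumption: a typical CICIDS2017 flow-feature count

# A small feedforward detector standing in for the thesis FFNN.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),  # benign vs. malicious
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART can compute gradients against it (white-box setting).
classifier = PyTorchClassifier(
    model=model,
    loss=loss_fn,
    optimizer=optimizer,
    input_shape=(N_FEATURES,),
    nb_classes=2,
    clip_values=(0.0, 1.0),  # features assumed scaled to [0, 1]
)
# A real run would first train the detector on the CICIDS2017 sample, e.g.
# classifier.fit(x_train, y_train, batch_size=128, nb_epochs=10)

# Placeholder inputs; the study used a cleaned, balanced CICIDS2017 sample.
x_test = np.random.rand(256, N_FEATURES).astype(np.float32)
y_test = np.random.randint(0, 2, size=256)

# One-step FGSM and iterative PGD evasion attacks; epsilons are illustrative.
fgsm = FastGradientMethod(estimator=classifier, eps=0.1)
pgd = ProjectedGradientDescent(estimator=classifier, eps=0.1,
                               eps_step=0.01, max_iter=40)

x_fgsm = fgsm.generate(x=x_test)
x_pgd = pgd.generate(x=x_test)

# Compare detection performance on clean vs. adversarial inputs.
for name, x in [("clean", x_test), ("FGSM", x_fgsm), ("PGD", x_pgd)]:
    preds = classifier.predict(x).argmax(axis=1)
    print(f"{name} accuracy: {(preds == y_test).mean():.3f}")
```

In the study, the same attack classes would be pointed at the trained FFNN and CNN detectors and scored with precision, recall, and attack success rate rather than raw accuracy; a CNN variant would additionally reshape the flow features to suit a convolutional input layer.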