Investigating the threat of adversarial machine learning attacks on AI-powered security information and event management systems

dc.contributor.author: Kikandi Safari Isaac
dc.date.accessioned: 2026-01-26T07:25:58Z
dc.date.available: 2026-01-26T07:25:58Z
dc.date.issued: 2025-05
dc.description: Full text thesis
dc.description.abstract: In the evolving landscape of cybersecurity, Security Information and Event Management (SIEM) systems have emerged as the nerve centers of modern defense, correlating logs, detecting anomalies, and delivering alerts by leveraging AI-powered intelligence. However, as organizations rely more on these machine learning components, a new threat arises: adversarial machine learning (AML). These attacks do not crash systems or flood networks; they manipulate data just enough to fool the AI into silence or confusion, a subtlety that makes them both dangerous and difficult to detect. This research set out to simulate such a scenario by incorporating two neural network architectures, a Feedforward Neural Network (FFNN) and a Convolutional Neural Network (CNN), as detection engines in a SIEM setting. Using a cleaned and balanced sample of the CICIDS2017 dataset, the models were trained and then tested against two evasion attacks: the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). These attacks, selected for their computational efficiency and widespread use in adversarial research, were applied under white-box conditions using the Adversarial Robustness Toolbox. The results told a striking story: the FGSM attack reduced the FFNN's detection rate from 95% to just 4.6%, while the PGD attack caused a substantial decline in the CNN's recall, dropping it to 25%. While the CNN fared better under FGSM, it responded by flagging benign traffic as malicious, raising the risk of alert fatigue. Metrics such as precision, recall, and attack success rate clearly illustrated how easily an AI model's performance could be compromised with subtle adversarial effort. These findings confirm that without proper defenses, AI-driven SIEMs are critically exposed, and the study recommends embedding robust countermeasures such as adversarial training, anomaly detection, and model hardening to safeguard these systems. As cyber threats grow smarter, so too must our defenses, not only in strength but in foresight.
dc.description.sponsorship: Adventist University of Africa
dc.identifier.uri: https://irepository.aua.ac.ke/handle/123456789/906
dc.language.iso: en
dc.publisher: Adventist University of Africa
dc.subject: Adversarial machine learning
dc.subject: Cybersecurity threats
dc.subject: Security information and event management (SIEM)
dc.subject: Artificial intelligence in security
dc.subject: Machine learning attacks and defenses
dc.title: Investigating the threat of adversarial machine learning attacks on AI-powered security information and event management systems
dc.type: Thesis
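The FGSM evasion the abstract describes can be sketched minimally in pure Python, with a toy logistic-regression "detector" standing in for the FFNN. All weights and feature values below are illustrative assumptions; the study itself used the Adversarial Robustness Toolbox rather than a hand-rolled gradient.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic-regression detector:
    x' = x + eps * sign(dL/dx), where L is binary cross-entropy."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For logistic regression, dL/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical weights and a sample the model flags as malicious (y = 1)
w, b = [2.0, -1.0, 0.5], 0.0
x = [1.0, 0.2, 0.8]
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.5)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# The perturbed sample scores lower, nudging the detector toward "benign"
```

PGD, the second attack in the study, is essentially this step iterated, with the perturbation projected back into an epsilon-ball after each iteration, which is why it degrades models that resist a single FGSM step.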

Files

Original bundle

Name: Safari Kikandi .pdf
Size: 2.69 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission