Enhancing CI/CD Security with Artificial Intelligence: State of the Art, Challenges, and Integrated Approaches

Authors

  • Iveri Jajanidze, PhD Candidate, Doctoral Program in Informatics, Georgian Technical University, Tbilisi, Georgia

Keywords

CI/CD Security, DevSecOps, Artificial Intelligence, Machine Learning, Explainable Artificial Intelligence, Federated Learning, Adversarial Machine Learning

Abstract

Continuous Integration and Continuous Delivery (CI/CD) pipelines have become a core component of modern software engineering, enabling rapid and automated deployment of applications. While these practices significantly improve development efficiency, they also introduce new and complex security risks, including supply chain attacks, configuration drift, and large-scale propagation of vulnerabilities through automated pipelines [6, 7, 8]. Traditional security mechanisms, which rely primarily on static rules and signature-based detection, are increasingly inadequate in highly dynamic DevSecOps environments [3, 12]. This paper investigates the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques to enhance the security of CI/CD systems. The proposed approach integrates anomaly detection, supervised classification, Explainable AI (XAI), federated learning, and adversarial robustness mechanisms into a unified analytical architecture [2, 4, 5]. The study argues that explainability and privacy-preserving learning are critical for adoption in regulated and mission-critical environments [2, 4, 7]. A conceptual and architectural framework is presented, together with an experimental evaluation strategy, demonstrating how AI-driven security analytics can provide proactive, transparent, and scalable protection for modern CI/CD pipelines.
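The anomaly-detection layer described above can be illustrated with a minimal, self-contained sketch. The choice of feature (per-run build duration) and the z-score threshold are illustrative assumptions for exposition only, not details taken from the paper's architecture, which combines this layer with supervised classification, XAI, and federated learning:

```python
import statistics

def zscore_anomalies(durations, threshold=3.0):
    """Flag pipeline runs whose duration deviates more than
    `threshold` standard deviations from the mean.

    A deliberately simple baseline detector: real CI/CD security
    analytics would use richer features (changed files, new
    dependencies, credential access patterns) and a learned model.
    """
    mean = statistics.fmean(durations)
    stdev = statistics.pstdev(durations)
    if stdev == 0:
        # All runs identical: nothing can be flagged.
        return []
    return [i for i, d in enumerate(durations)
            if abs(d - mean) / stdev > threshold]

# 20 typical one-minute builds, then one run that suddenly takes
# ten minutes (e.g. an injected build step exfiltrating data).
history = [60.0] * 20 + [600.0]
flagged = zscore_anomalies(history)  # the run at index 20 is flagged
```

In a production pipeline this check would sit alongside the model-based detectors; its value here is showing where such a signal plugs in, not the detection logic itself.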


References

[1] C. Cath et al., “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach,” Science and Engineering Ethics, vol. 24, no. 2, pp. 505–528, 2018.

[2] D. Gunning et al., “XAI—Explainable Artificial Intelligence,” Science Robotics, vol. 4, no. 37, 2019.

[3] D. Ucci, L. Aniello, and R. Baldoni, “Survey of Machine Learning Techniques for Malware Analysis,” Computers & Security, vol. 81, pp. 123–147, 2019.

[4] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated Machine Learning: Concept and Applications,” ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, pp. 1–19, 2019.

[5] F. Doshi-Velez and B. Kim, “Towards a Rigorous Science of Interpretable Machine Learning,” arXiv preprint arXiv:1702.08608, 2017.

[6] National Institute of Standards and Technology (NIST), Security and Privacy Controls for Information Systems and Organizations, NIST SP 800-53 Rev. 5, 2020.

[7] ISO/IEC, ISO/IEC 27001:2022 Information Security, Cybersecurity and Privacy Protection — Information Security Management Systems — Requirements, 2022.

[8] European Union, Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, 2022.

[9] I. Kartvelishvili and G. Kuchava, “Optimization of Software Delivery Quality and Speed in DevOps Using CI/CD,” in Proc. Georgian Technical University Conference, Tbilisi, 2024.

[10] I. Kartvelishvili, M. Okhanashvili, and N. Chorkhauli, “Review and Analysis of Existing Methods for Network Attack Detection,” in Proc. International Scientific-Practical Conference, Tbilisi, 2023.

[11] A. Bichnigauri et al., “Strengthening Cyber Defenses — The Crucial Role of Phishing Simulation in Modern Security Strategies,” Defence and Science, no. 3, 2024, doi: 10.61446/ds.3.2024.8467.

[12] Google Cloud, “MLOps: Continuous Delivery and Automation Pipelines in Machine Learning,” Google Cloud Documentation, 2022.

[13] Microsoft, “Secure DevOps Kit for Azure (AzSK),” Microsoft Documentation, 2023.

[14] TensorFlow, “TensorFlow Extended (TFX) Documentation.” [Online]. Available: https://www.tensorflow.org/tfx

[15] Kubeflow, “Kubeflow Pipelines Documentation.” [Online]. Available: https://www.kubeflow.org/docs/components/pipelines

[16] Grafana Labs, “Grafana Documentation.” [Online]. Available: https://grafana.com/docs/

[17] M. T. Ribeiro, S. Singh, and C. Guestrin, “LIME: Local Interpretable Model-Agnostic Explanations.” [Online]. Available: https://github.com/marcotcr/lime

[18] S. Lundberg and S.-I. Lee, “SHAP: A Unified Approach to Interpreting Model Predictions.” [Online]. Available: https://github.com/slundberg/shap

[19] European Union, “Artificial Intelligence Act (EU AI Act).” [Online]. Available: https://artificialintelligenceact.eu/

Published

26-02-2026

How to Cite

Enhancing CI/CD Security with Artificial Intelligence: State of the Art, Challenges, and Integrated Approaches. (2026). Computational and Applied Science, 1(1), 138-154. https://casjournal.ge/index.php/cas/article/view/13