The popularisation of AI has enabled businesses to expedite the code development cycle. However, this innovation comes at a cost — one that CISOs and IT leaders are concerned about. It has recently come to light that a significant share of the code generated by AI models contains bugs that cyber attackers can easily exploit. This article provides CISO-approved guidelines for secure AI system development.
Why You Should Be Wary of AI-Generated Code
When developers are facing tight deadlines, it’s easy to miss vulnerable patterns like hardcoded credentials and plaintext secrets hidden in code. Since AI models (whether advanced Large Language Models (LLMs) like GPT-4, or coding assistants) learn from large volumes of existing code, they occasionally imitate bad practices found in that training data, which in turn can result in insecure code outputs. Development teams working in industries that handle sensitive information, like healthcare or finance, should be doubly careful when employing AI-generated code, because commonplace issues, or “bugs,” can actually represent serious cyber security risks.
Some common security vulnerabilities found in AI outputs include:
Hardcoded Secrets
AI models don’t understand the concept of “secret” information. If a model’s training data contains code with hardcoded API keys, database credentials, or other secrets, it may reproduce that pattern, embedding plaintext passwords or cloud API keys directly in configuration files or source code. Development teams must thoroughly review AI-generated code for embedded secrets to ensure they are using AI code generators and assistants in a secure and responsible way.
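As a minimal illustration of the difference (the environment variable name below is hypothetical), secrets should be read from the environment or a secrets manager at runtime rather than committed to source control:

```python
import os

# Insecure pattern AI assistants sometimes reproduce:
# API_KEY = "sk-live-1234567890abcdef"   # plaintext secret committed to source control

# Safer pattern: load the secret from the environment (or a secrets manager)
# and fail loudly if it is missing. "PAYMENTS_API_KEY" is a hypothetical name.
API_KEY = os.environ.get("PAYMENTS_API_KEY")
if API_KEY is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
```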
Unsafe Authentication Patterns
AI models do not understand the context of cybersecurity; as a result, they can end up exposing sensitive data. For example, AI-generated code has often been found to store passwords in plain text, when security best practice is to hash and salt them. One recent study highlighted an AI-suggested user authentication function that compared passwords as plain text, an approach that only works if the passwords themselves are stored unhashed, leaving them exposed to cyber attackers.
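As a short sketch of the safer pattern, using the widely adopted bcrypt library (the function names here are illustrative, not any particular vendor’s API):

```python
import bcrypt  # third-party library: pip install bcrypt

def hash_password(password: str) -> bytes:
    # bcrypt generates a random salt and embeds it in the resulting hash.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    # Compare against the stored hash; never compare plain text strings directly.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

# Insecure pattern sometimes suggested by AI assistants:
# if submitted_password == stored_password:  # plain text comparison
#     grant_access()
```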
Insecure Cryptographic Algorithms
Because training data often includes legacy code, it is not uncommon for AI to choose outdated or weak algorithms when asked to implement a security function like encryption. Development team members should verify cryptographic implementations, understanding that AI can sometimes produce overly simplistic ciphers that are easy to crack, or rely on deprecated libraries rife with known vulnerabilities.
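For contrast, here is a hedged example of a modern choice: authenticated AES-256-GCM via the Python cryptography library (the sample plaintext is made up), rather than weak or deprecated options such as DES, RC4, or ECB-mode AES:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Generate a 256-bit key and use an authenticated cipher (AES-GCM),
# which protects both confidentiality and integrity.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a unique nonce is required for every encryption
ciphertext = aesgcm.encrypt(nonce, b"sample record", associated_data=None)
plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data=None)
```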
While the widespread use of AI code generators and assistants has driven important innovations, it has also introduced novel security vulnerabilities and accelerated the proliferation of insecure code. Companies that have deployed AI-generated code without rigorous code reviews have suffered damaging security breaches after being lulled into a false sense of security by the code’s surface-level logic.
How to Withstand AI Risks with Trusted Security Frameworks
Trusted cybersecurity frameworks provide guidelines for secure AI development as well as structured approaches to building and maintaining secure systems and infrastructure. The safest way to adopt AI code generators or assistants is to follow well-established frameworks that ensure every project, no matter how pressing, adheres to the same standards.
BD Emerson’s team of expert consultants understands guidelines for secure AI use and can help your team navigate the implementation of trusted security frameworks like:
Cloud Security Alliance (CSA)
AI workloads typically run in the cloud, which makes it essential to bolster cloud security so that common issues like cloud misconfigurations or poor key management don’t cascade into larger problems. Development teams should apply a cloud-focused lens to security best practices by integrating the CSA’s Cloud Controls Matrix alongside its current AI guidance. According to the 2024 CSA report, the best cloud security is risk-based and includes comprehensive AI audits encompassing resilience, transparency, and accountability, rather than simply ticking regulatory boxes.
AWS Well-Architected Framework
The AWS Well-Architected Framework provides a clear set of best practices for secure AI systems and AI-powered applications in the cloud. The framework’s Security Pillar covers identity management, data protection, and threat detection, and includes guidance on security guardrails such as encryption and least-privilege access. AWS emphasizes that when the framework is followed, AI workloads inherit the same robust security posture as traditional applications; AI should not bypass security reviews. AWS also provides specialized tools, like the Well-Architected Machine Learning Lens, to help organizations apply these principles specifically to AI and machine learning environments.
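As one small, hedged example of such a guardrail, the sketch below uses boto3 to enable default server-side encryption on an S3 bucket; the bucket name is hypothetical, and real deployments would typically manage this through infrastructure-as-code with least-privilege IAM roles.

```python
import boto3

s3 = boto3.client("s3")

# Enforce default encryption at rest for all new objects in the bucket.
# "example-data-bucket" is a placeholder name.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```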
NIST Cybersecurity Framework (CSF)
NIST CSF is an extremely popular cybersecurity framework, adopted by companies across diverse industries to manage and reduce cybersecurity risk. Its core consists of the functions Identify, Protect, Detect, Respond, and Recover (CSF 2.0 adds Govern as a sixth). This function-based approach applies just as much to AI systems as to traditional IT. Implementing NIST CSF means that all AI-generated code undergoes the same rigorous risk assessment and controls as any other mission-critical software. When following NIST guidelines, your team will proactively identify security vulnerabilities introduced by AI, implement protective measures like code reviews and secret scanning, and prepare an incident response plan for when something slips through.
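As a simplified sketch of what secret scanning can look like in a pre-commit hook or CI step (the patterns below are illustrative only; teams typically rely on dedicated scanners such as gitleaks or git-secrets):

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real scanners ship far more comprehensive rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # private key material
    re.compile(r"(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def scan(paths: list[str]) -> int:
    """Return the number of potential secrets found across the given files."""
    findings = 0
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                findings += 1
                print(f"{path}: possible secret: {match.group(0)[:40]}")
    return findings

if __name__ == "__main__":
    # Exit non-zero when anything suspicious is found, so the commit or build fails.
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```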
Learn more about BD Emerson’s comprehensive NIST consulting services.
Why Trust is Non-Negotiable in High-Risk Industries
Finance and healthcare operate under a level of scrutiny few other industries face. From bank account credentials to medical record data, businesses in these sectors handle some of the most sensitive personal information out there. At the same time, they are high-value targets for cybercriminals.
As a result, these businesses are subject to stringent regulations like HIPAA, GDPR, and PCI-DSS. Introducing AI-generated code into this type of environment means more complexity and risk. A single vulnerability—like an unencrypted database field or a hardcoded admin password in AI-written code—could cause devastating consequences. In healthcare, that might mean the unintentional sharing of patient records. In finance, it could mean the exposure of someone’s financial account information. In either case, the cost goes far beyond remediation: it damages trust, and once that trust is broken, it’s incredibly difficult to earn back.
Companies in these industries face steep penalties for such missteps, including fines, remediation costs, loss of customer trust and, occasionally, significant harm to individuals. Earning and keeping customer trust is a core element of doing business in the healthcare and financial sectors. Introducing AI-generated code that contains vulnerabilities into either environment can set the stage for catastrophe. Users of technology in these sectors have grown wary of AI tools as a result, and it is up to companies to reassure customers that they are doing everything possible to address and mitigate these risks, ideally by achieving and maintaining rigorous compliance standards.
Complying with a strict security framework communicates to your clients or partners that while your company is innovating its processes and tools, it also adheres to the highest levels of information security. In addition to compliance, it is important for teams to regularly test their systems and AI outputs for possible weaknesses. Performing routine audits of AI-generated code, maintaining transparency regarding the use of AI, and having clear AI-specific incident response plans prepared are other ways companies can instill confidence in their customers and partners.
How to Move Forward
For CISOs in the modern era, it’s just as important to champion AI innovation as it is to insist upon strict governance practices that keep artificial intelligence tools operating safely and ethically. Nowhere is this dual mission more important than in industries that handle sensitive data, like the healthcare and financial sectors. When asked “What is Secure AI System Development?” by stakeholders, partners, and customers, the most successful development teams will be able to demonstrate how secure artificial intelligence tools are being implemented to increase efficiency in a controlled way.
Ultimately, integrating AI with proven security frameworks goes beyond fixing vulnerabilities—it’s about protecting the trust that underpins innovation in our most vital industries. To move forward with confidence, it’s essential to ensure the future of AI is built on a secure, reliable foundation.
Learn how to strengthen your security infrastructure without sacrificing efficiency by scheduling a consultation with us today.
