Why AI can’t replace humans in secure code development
GUEST OPINION: For developers, AI coding tools are the flavour of the month, and they can help improve productivity, but only if used safely.
At Secure Code Warrior, we’ve seen productivity gains from using AI technology in areas like reporting and overall it’s been a rewarding exercise to experiment with it across the business. It’s most effective as a companion tool rather than a direct replacement for an experienced person.
Indeed, companies that have rushed to shed staff in favour of AI replacements have likely found themselves in a predicament as they worked to separate errors from perceived efficiencies.
However, one of the most challenging aspects of integrating AI tools into business operations is determining which outputs can be trusted, which tools deserve a place in your tech stack, what each tool's strengths and weaknesses are, and how to ensure consistent results when different tools or processes are in use.
Ideally, organisations should explore the most effective ways to test each AI engine, and assess when developers can be trusted to use AI responsibly, based on their level of security awareness, their critical eye and their insight into the project as a whole.
Our initial experimentation with AI showcased its limitations quite plainly, and subsequent tests have revealed that similar contextual issues remain despite technological upgrades. In some cases, AI initially seemed like the optimal solution, but has ultimately proved to be less effective than human intervention.
It’s actually become clear that while AI coding tools can provide something of a productive ‘pair programming’ experience, the output must be assessed and overseen by security-aware developers. And that’s something we help achieve within development teams.
Through our range of experiments, we have formulated five key considerations for judging when a task suits AI and when it requires a human.
1. Is the outcome tactical and automatable, or strategic and reliant on critical thinking?
Building a simple calculator, for example, is different—especially in terms of complexity and risk—from navigating the compliant configuration of a payment gateway.
Exercise discretion and resist the temptation to outsource complex issues to AI tools.
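To make the distinction concrete, here is a minimal, hypothetical Python sketch of the kind of tactical task an assistant can be left to automate: the logic is self-contained and a reviewer can verify it at a glance. A payment gateway integration, by contrast, carries compliance, secrets-handling and business-logic decisions that no generated snippet can settle on its own.

```python
# Hypothetical example of a tactical, automatable task: a tiny calculator.
# The whole behaviour is visible in a dozen lines and easy to review.
def calculate(a: float, b: float, op: str) -> float:
    """Apply a basic arithmetic operation to two numbers."""
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,  # caller is expected to handle division by zero
    }
    if op not in operations:
        raise ValueError(f"Unsupported operation: {op}")
    return operations[op](a, b)
```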
2. What stage of the development life cycle is the project at?
Do I need suggestions or do I need the most secure, polished result? If it’s a final product, then this is the realm of a trained, security-aware human with experience.
AI is fine for nutting out some initial concepts, but everything must be fine-tuned and assessed for its suitability and security in the context of the overall project.
3. What is my experience level and how will the AI tool be assisting me?
If you have low security awareness and little applied skill in secure coding best practices, there's a chance you will do significant damage at a speed and scale not previously possible.
Working with AI coding assistants should be restricted until a baseline of security skills is proven beyond doubt.
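As an illustration of the risk (a hypothetical sketch of ours, not code taken from any particular assistant), the first function below is the kind of query an assistant can produce and an untrained developer might wave through; the second is the security-aware fix using a parameterised query.

```python
import sqlite3

# Hypothetical insecure suggestion: building SQL by concatenating user input
# leaves the query open to SQL injection.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()  # vulnerable to injection

# Security-aware version: a parameterised query keeps user data out of the SQL text.
def find_user(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```

Spotting the difference takes seconds for a developer with that baseline of security skills; without it, the flaw ships at AI speed.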
4. Does the coding language have sufficient public reference data to generate secure code?
LLMs are only as good as their training data and there’s a huge margin for error. If you’re working in an obscure language, there will be less information available and fewer developers able to assist.
5. Do I trust the result is secure?
The answer should be 'no': humans must still handle big-picture tasks such as identifying and enforcing security best practices.
Will AI replace developers for security-related tasks?
The answer is ‘no’. Job displacement shouldn’t be a concern unless developers aren’t making any effort to advance their own skill sets or learn how to leverage AI effectively and responsibly.
AI is a helping hand for quick fixes and a useful programming partner, not the foundation of, or a crutch for, one's development skills.
Only humans can (and should) provide valuable oversight when dealing with areas such as compliance requirements for data and systems, design and business logic, and threat modelling practices for developer teams.
Experienced, security-aware developers with honed problem-solving skills will be in demand and more productive as the technology advances, but ‘set and forget’ software builds would only be attempted by those who don’t care about quality or security.
AI coding tools are here to stay, but their best role is as an assistant to experienced humans.