Wednesday
Room 3
15:00 - 16:00
(UTC±00)
Talk (60 min)
Skill Degradation: An Empirical Analysis of 400+ AI‑Generated Security Fixes
Pressure to ship features and gaps in secure coding knowledge are driving developers to lean on generative AI for security patches. But is this actually improving software security, or merely masking knowledge gaps?
This presentation is based on a research experiment in which we systematically reviewed 400+ AI‑generated patches for real‑world vulnerabilities. We set out to answer two key questions:
1. Do AI‑generated patches actually fix the vulnerabilities?
2. Do developers learn from AI‑generated patches?
Our findings show a significant drop in remediation accuracy when developers rely solely on AI suggestions. A large number of participants could not explain how the AI‑generated patch addressed the issue.
Rather than serving as a mentor, current AI assistance risks fostering over‑reliance, leading to a superficial understanding of vulnerabilities and passive consumption. Our findings are in line with related research showing a considerable drop in learning when humans rely on auto‑suggestion tools.
This presentation is filled with real‑world examples and data from a secure coding contest. We will discuss the implications of these findings for software security, and explore how to use AI tools effectively without sacrificing essential secure coding skills.