If there are actually no bugs, can’t that create a situation where it’s impossible to break it? Not to say this is actually a thing AI can achieve, but it doesn’t seem like bad logic.
Even if there’s such a thing as a program without bugs, you’d still be overlooking one crucial detail - no matter the method, every security system eventually has to interface with humans. Humans are SO much easier to hack than computers.
Let’s say you get a phone call from your boss - it’s their phone number and their voice, but they sound a bit panicked. “Hey, I’m just about to head into a meeting to close a major deal, but my laptop can’t access the server. I need you to set up a temporary password in the next two minutes or we risk losing this deal. No, I don’t remember my backup - it’s written down in my desk but the meeting is at the client’s office.”
You’d be surprised how many people would comply, and all of that can be done by AI right now. It’s all about managing risk - there’s never going to be a foolproof system.
I’d guess that hypothetical AI verification of code would be like that: there are probably no bugs, but it’s not a totally sure thing. Even without mathematical certainty, though, that doesn’t mean most programs verified this way can actually be exploited.
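To make the “probably no bugs, but not a sure thing” gap concrete, here’s a minimal sketch of my own (not from the thread): a function with a bug on exactly one input out of four billion - the classic 32-bit overflow edge case - which random testing will essentially never find.

```python
import random

INT32_MIN = -2**31

def buggy_abs32(x):
    """abs() as it behaves with 32-bit wraparound: wrong for exactly
    one input, INT32_MIN, because -INT32_MIN overflows back to itself."""
    r = -x if x < 0 else x
    # emulate 32-bit two's-complement wraparound
    return (r + 2**31) % 2**32 - 2**31

def fuzz(trials=100_000, seed=0):
    """Random testing: return a counterexample where abs() goes negative,
    or None if every trial passed ("probably no bugs")."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randrange(-2**31, 2**31)
        if buggy_abs32(x) < 0:
            return x
    return None

print(fuzz())                   # with overwhelming probability: None
print(buggy_abs32(INT32_MIN))   # -> -2147483648, still negative: the bug
```

A hundred thousand passing trials here is strong evidence, not proof - the defect sits on one input in 2^32, so the checker reports “probably fine” while an attacker who knows the edge case breaks it on the first try.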
Schrödinger’s AI: It’s so smart it can build perfect security, but it’s too dumb to figure out how to break it.
Rice’s Theorem prevents this… mostly. Deciding any nontrivial semantic property of arbitrary programs - “has no bugs” included - is undecidable in general, though you can still verify particular programs.
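The argument behind that can be sketched concretely. Below is my own toy illustration (names like `claimed_bug_checker` are hypothetical, not a real API) of the standard diagonalization: given any total “perfect bug detector,” you can build one program that does the opposite of whatever the detector predicts, so the detector is necessarily wrong about it.

```python
def claimed_bug_checker(program):
    """A hypothetical 'perfect' bug detector. Any concrete, always-
    terminating implementation must commit to some answer; this one
    declares everything bug-free. (Answering False instead fails the
    same way - try it.)"""
    return True  # True means "no bugs"

def adversary():
    """A program constructed to contradict the checker's verdict on
    itself: misbehave if declared bug-free, behave if declared buggy."""
    if claimed_bug_checker(adversary):
        raise RuntimeError("bug!")  # declared clean -> crash
    return "fine"                   # declared buggy -> run cleanly

# Whatever the checker says about `adversary`, reality disagrees:
try:
    adversary()
    # ran cleanly, so the checker was right only if it said "no bugs"
    checker_was_right = claimed_bug_checker(adversary)
except RuntimeError:
    # crashed, so the checker was right only if it said "buggy"
    checker_was_right = not claimed_bug_checker(adversary)

print(checker_was_right)  # -> False
```

The same contradiction falls out no matter what `claimed_bug_checker` returns, which is why no general-purpose verifier - AI or otherwise - can be both total and always correct; the “mostly” is that restricted program classes and specific programs can still be proven correct.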