3 Comments
ginadotexe:

Regarding the cybersecurity challenges you've hinted at, I've learned that poetry and inaudible voice commands are starting to be used for injection attacks on AI systems. Poetry is highly contextual and can be used to smuggle malicious prompts past an AI agent's safeguards, while malicious voice commands can be transmitted at frequencies inaudible to humans to inject commands into voice-recognizing AI agents. I'm not sure there's a solution to such injection attacks, but they certainly present a cybersecurity challenge. ( https://medium.com/@albeeandrew/silent-sphinx-leveraging-adversarial-poetry-with-near-ultrasound-inaudible-trojan-nuit-attack-7c99980bfe53 )

ginadotexe:

I like how you're able to simplify how generative AI agents work, and it hadn't crossed my mind that these coding agents are subject to the same mathematical results (and Turing completeness) as any other piece of software.

When ChatGPT started creating a buzz, I heard a lot of talk about how "programmers would be replaceable," given that it (and other AI agents) could just generate code. I'm glad to learn that not only will there always be a need for skilled programmers to 'check the work of an AI' (for lack of better phrasing), but there will also be a need for proper computer scientists who can work on the 'theoretical aspect' of generative AI.

Despite the guarantee (heh) that computer scientists will always have a career, I can't help but think that the way computer science is taught may now become a bit challenging, and might have to change as a result. I'm sure it's simple enough to detect plagiarism amongst students, but now we'd have to detect plagiarism against one (or several) coding agents as well. Have there been any changes in computer science pedagogy that you've noticed since AI agents became easily accessible to students?

Adam Chlipala:

Actually, part of the position I'm looking to stake out in this post is that, through the proper design of programming tools, we can decrease the need for human effort by making AI more effective. I do think there's a good chance that the full software-engineering lifecycle is automated almost everywhere within the next decade or so.

The intersection of AI coding assistants and programming education is certainly a hot topic in the academic world. I don't claim to have any special insight there, but we are seeing a lot more reliance on exams where students aren't allowed to use AI.