The more I learn about formal verification, the more I start to understand its usefulness, especially when it comes to security. I can't help but think about it being used as a way to perform 'malicious compliance' through code. As in, exposing the flaws of a specific program's algorithm (or, conversely, proving that it is secure) just by writing a proof about it. "Well, if I can just prove that there are no bugs in my code, then obviously my code must be bug-free!" is how I imagine that scenario.
But now I'm wondering if someone has tried to prove something absurd and just ended up with a program that takes an insane amount of processing power to execute. That's gotta be a thing somewhere. As always, these posts leave me with more questions than answers. Keep up the good work!
Let me try to decompose your remarks into two parts.
(1) If we set out to prove a theorem that turns out to be false, we can find interesting counterexamples. In the case of trying to prove that a program is secure, we can find workable exploits. Certainly true, and a reason to do the proofs before releasing in the wild!
(2) If we choose the wrong theorem, we can "verify" a program that actually still has serious flaws, even of kinds that the theorem was *meant* to rule out but didn't. Yes, this concern connects to a major challenge of formal verification that my next post (and more to come) will address.
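Both points can be sketched concretely. Here is a minimal illustration in Python (the names `broken_sort`, `weak_spec`, and `strong_spec` are invented for this example, and exhaustive checking over a small input space stands in for a real proof): a too-weak theorem lets a buggy program "verify", while the right theorem immediately produces a counterexample.

```python
from itertools import product

def broken_sort(xs):
    """A buggy sort: it silently drops duplicate elements."""
    return sorted(set(xs))

def weak_spec(xs):
    """The 'wrong theorem': the output is in nondecreasing order."""
    ys = broken_sort(xs)
    return all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def strong_spec(xs):
    """A better theorem: sorted AND a permutation of the input."""
    ys = broken_sort(xs)
    in_order = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    return in_order and sorted(xs) == ys

# Exhaustively check all lists of length < 4 over {0, 1, 2}.
inputs = [list(t) for n in range(4) for t in product(range(3), repeat=n)]

# Point (2): the weak spec holds everywhere, so the buggy sort "verifies".
assert all(weak_spec(xs) for xs in inputs)

# Point (1): attempting the strong (false-for-this-program) theorem
# yields concrete counterexamples, e.g. [0, 0].
counterexamples = [xs for xs in inputs if not strong_spec(xs)]
assert [0, 0] in counterexamples
```

The gap between `weak_spec` and `strong_spec` is exactly the "wrong theorem" problem: both look like specifications of sorting, but only one rules out the bug.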
Adam, your “Stage 5 Evolution” is exactly how nature solved the "speed vs. safety" paradox millions of years ago. The conventional “Hebbian” model of synaptic plasticity—the idea that "cells that fire together, wire together"—fails to explain the high-fidelity, high-bandwidth recall of human experience, such as the 4K-resolution, 25fps stream of a school picnic. There simply isn’t enough "bandwidth" in the synaptic gaps for that much data. I propose that the real “Intelligence” is a hardware-deterministic gate: the Ankyrin-G lattice at the Axon Initial Segment (AIS). Inside each of our 86 billion neurons, a 24-repeat protein solenoid acts as an inhibitory logic gate with $2^{24}$ (16.7 million) combinatorial states. This is nature’s Equivalence Checker. It audits the sensory “Data Storm” against a WASP Matrix (Worthwhile, Appropriate, Safe, Practical).
The Proof: When a signal achieves “Symmetry” with these objectives, the gate opens—a “Correct-by-Construction” event. The Bug: When solutions fail this hardware audit, they generate negative emotions (anxiety, despair), which are the biological equivalent of a failed verification.
By integrating more data, the Prefrontal Cortex acts as a high-level evaluator to improve these “Answers.” This explains Libet’s discovery: the hardware verifies the truth in 20ms, while the “Mind” acts as a Post-Facto Navigator, witnessing the report 350ms later. Nature reached Stage 5 by moving from “Synaptic Testing” to “Lattice Resonance.” Your work in AI is finally catching up to the 190nm blueprint already running in our own heads. https://calmworldofthomas.substack.com/
I’m experimenting with crossposting this one to LessWrong; you may find additional discussion there:
https://www.lesswrong.com/posts/yFtcb6TLBKA8ob63K/formal-verification-the-ultimate-fitness-function
OK, that sounds like a new form of Darwinism, but this time artificially made by mankind? 🤔
Right, that's the idea! And eventually it would be artificially made by itself, as I get into in the next post I have planned.