5 Comments
Neural Foundry

Sharp analogy. The interpreter framing cuts through a lot of philosophical fog by grounding consciousness in actual cognitive architecture rather than abstraction. What I find interesting is how this sets up the diversity crisis: we're about to encounter minds that don't fit our modeling language at all. It kind of forces the question of whether rights should depend on compatibility with human brain architecture or on something more fundamental.

Howard Reubenstein

Adam, in terms of the mind modeler being useful to run models of "other" minds, do you consider it possible/important that the mind modeler also allows us to introspect on the "thinking" of our own minds? The experience of introspection seems to lead to the experience (perhaps illusory) of our own consciousness. (I am reminded of Brian Smith's 3-Lisp.)

Adam Chlipala

That's a great point, Howard, that the same mind-modeling can be turned inward. To me it seems plausible that reflecting on our own consciousness is just the experience of noting that our circuits for mind-modeling are firing -- and naturally they are an especially good fit for our own brains. I still don't see anything fundamental or special about that style of introspection or the style of mind it models, which makes me skeptical that it's important to build AIs that mimic human cognitive architecture, rather than carrying out reflective reasoning in whatever manner is most convenient.

James Torre

In the vein of intelligences very different from our own, are you familiar with the work of Michael Levin, of xenobot fame?

Adam Chlipala

Great thought: Levin's kind of intelligent system could indeed be a good example of a mismatch with the parts of human brains that evolved to track other minds.