Nycturne
Are we not just biological computers? If so, why couldn't a realistic, human-like AI personality, with those elements of curiosity, be created once the technology exists?
My own hunch is that you could. But that would require what is being called "general AI", and I also question just how complex the computer system would need to be to do so.
It's not just that we have language processing, spatial awareness, etc., etc. At some point you get emergent behaviors beyond what the individual components provide. And as Yoused points out, consciousness is a subtle and not well understood phenomenon to begin with, so how we get from a "bundle of individual capabilities" to "consciousness" is a hard question.
I'll step back to the lights-on/lights-off standard. As human beings, our lights are on: we have consciousness and self-awareness. You could create something very human-like that reacts to its environment just as we do, and interacts with other human beings in a manner indistinguishable from a human being, but its lights are off. It may have the sensors that give it input, but it's still just running an elaborate program. Could this be us?
What I might argue, though, is that consciousness is a spectrum. It isn't just on or off. My cat shows awareness in ways that suggest some level of consciousness. Is it at the same level as mine? Who knows. But she has a personality and a will, and can initiate activity based on want or desire. I often compare her to a toddler in terms of capability (communication, understanding of the world around her, etc.). Dolphins have social behaviors that are strikingly complex. I would be shocked if we discovered down the road that animals are not conscious in some way. And since animal brains on Earth (especially in mammals) developed through shared lineages, I suspect we will figure out consciousness in animals first, as we have more in common with them.
However, an LLM is a narrow AI. The fact that it does its one task convincingly is more a statement about us than about the tech. But it raises a question: if we don't know how to identify consciousness properly, and instead rely on rudimentary demonstrations of thinking, how will we ever identify a truly alien intelligence that doesn't think the way we (mammals) do? LLMs in particular seem convincing because we've tied thought to language in our tests (like the Turing Test), when language is just one facet of us.
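To illustrate that last point, here's a toy Markov-chain sketch in Python (my own throwaway example, not how any real LLM works): it strings words together using nothing but "what tends to follow what" statistics, yet the output can read as locally fluent English.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Emit words by repeatedly sampling what tends to follow the last pair."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:  # dead end: restart from a random key
            key = random.choice(list(chain.keys()))
            followers = chain[key]
        word = random.choice(followers)
        out.append(word)
        key = tuple(out[-len(key):])  # slide the window forward
    return " ".join(out)

# Tiny stand-in corpus; any text works.
corpus = (
    "the cat shows awareness in ways that suggest some level of consciousness "
    "the cat has a personality and a will and can initiate activity "
    "dolphins have social behaviors that are strikingly complex"
)
print(generate(build_chain(corpus)))
```

There is obviously nobody home in those thirty lines. A real LLM is vastly more sophisticated, but the demo shows why fluent-looking language, by itself, is a weak test for thought.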