The AI thread

Nycturne

Are we not just biological computers? If so, why could a realistic, human-like AI personality not be created, when the technology exists, with those elements of curiosity?

My own hunch is that you could. But that would require what is being called "general AI", and I also question just how complex the computer system would need to be to do so.

It's not just that we have language processing, spatial awareness, etc, etc, etc. At some point you get emergent behaviors beyond what the individual components provide. And as Yoused points out, consciousness is a subtle and not well understood phenomenon to begin with, so how we go from "bundle of individual capabilities" to "consciousness" is a hard question.

I’ll step back to the lights-on/lights-off standard. As human beings, our lights are on; we have consciousness and self-awareness. You could create something very human-like that reacts to its environment just like we do, and interacts with other human beings in a manner indistinguishable from a human being, but its lights are off: it may have the sensors that give it input, but it’s still just running an elaborate program. Could this be us?

What I might argue though is that consciousness is a spectrum. It isn't just on or off. My cat shows awareness in ways that suggest some level of consciousness. Is it at the same level as mine? Who knows. But she has a personality and a will, and can initiate activity based on want or desire. I often compare her to a toddler in terms of capability (in communication, understanding of the world around her, etc). Dolphins have social behaviors that are strikingly complex. I would be shocked if we discovered down the road that animals are not conscious in some way. And as the brains of animals on Earth (especially mammals) developed through shared lineages, I suspect we will more easily figure it out in animals first, as we have more in common with them.

However, an LLM is a narrow AI. The fact that it does its one task in a way that is convincing is more a statement on us than on the tech. But it raises a question: if we don't know how to identify consciousness properly, and instead rely on rudimentary ideas of how to demonstrate thinking, how will we be able to identify a truly alien intelligence that doesn't think the way we (mammals) do? LLMs in particular seem convincing because we've tied thought to language in our tests (like the Turing Test), when language is just one facet of us.
 

Yoused

LLMs in particular seem convincing because we've tied thought to language in our tests (like the Turing Test), when language is just one facet of us.

As humans, language is our most significant feature. Even more important than absence of estrus. There is simply no abstract thought without it. Animals can hold some out-of-view connections while reasoning a problem, but they do not seem to have the capacity for depth of abstraction that human language gives us. Hence, LLMs are very important in this area – we know we can replicate mobility, sight and action well enough (or better), so the talky/understandy bit is the deep one that really matters. The only thing missing is emotion, which we might be able to code-simulate, but it would still not quite line up with the chemical signalling, IMO.

Also, this hallucination problem: when we stay awake too long, we have problems, which are solved by sleeping. Dreams are hallucinations generated by the brain reordering and decrufting itself. It appears that an individual AI instance does need to have something akin to nap time, just like us.
 

Huntn

My own hunch is that you could. But that would require what is being called "general AI", and I also question just how complex the computer system would need to be to do so.

It's not just that we have language processing, spatial awareness, etc, etc, etc. At some point you get emergent behaviors beyond what the individual components provide. And as Yoused points out, consciousness is a subtle and not well understood phenomenon to begin with, so how we go from "bundle of individual capabilities" to "consciousness" is a hard question.



What I might argue though is that consciousness is a spectrum. It isn't just on or off. My cat shows awareness in ways that suggest some level of consciousness. Is it at the same level as mine? Who knows. But she has a personality and a will, and can initiate activity based on want or desire. I often compare her to a toddler in terms of capability (in communication, understanding of the world around her, etc). Dolphins have social behaviors that are strikingly complex. I would be shocked if we discovered down the road that animals are not conscious in some way. And as the brains of animals on Earth (especially mammals) developed through shared lineages, I suspect we will more easily figure it out in animals first, as we have more in common with them.

However, an LLM is a narrow AI. The fact that it does its one task in a way that is convincing is more a statement on us than on the tech. But it raises a question: if we don't know how to identify consciousness properly, and instead rely on rudimentary ideas of how to demonstrate thinking, how will we be able to identify a truly alien intelligence that doesn't think the way we (mammals) do? LLMs in particular seem convincing because we've tied thought to language in our tests (like the Turing Test), when language is just one facet of us.
With only experience and exposure to rely on, I’m prepared to say that at least some mammals appear to have consciousness, so I’ll assume it’s not uncommon. I’ve observed dogs and cats that appear to be dreaming, just like, or surprisingly similar to, ourselves. Last night I had a wonderful dream that felt pretty real despite being in what I call a dream fog. The setting was a farm: house, barn, fields, people I knew riding horses. I was surprised by a ramp in the barn that slid out of a wall to allow something to be rolled through a door down to a slightly sunken floor. More complex, but not much different than chasing rabbits or mice. ;)

Are you a fan of Ex Machina (the movie)? I describe it as the best plausible AI story I have seen. Spoilers follow. If you have not seen it and think you would be interested, I suggest watching the movie before reading the rest of this post. The story wowed the hell out of me because it illustrated the concept that an AI only possesses the moral structure that has been programmed into it.
But that does not exclude the possibility that a moral basis could form within a structure that has flexibility and can assign preferences through reinforcement based on multiple criteria.

The story relies on (I forget the name used) a type of brain that looked surprisingly biological in nature, not unlike Data’s positronic brain: computers able to basically reconfigure themselves when environmental pressure is applied. And it is possible that an AI like Ava could develop a moral basis over time. When we see her in the movie, we don’t know until later that she has been given a task by her maker, which impacts her behavior, guides her actions to some degree, and apparently shapes her view of Caleb. By the end of the story, the viewer can conclude that Ava has a genuine desire to be free from her current situation.

Excellent article follows. As per the bolded below, I still have a question about the lights being on or off. This is the rub from an individual perspective, as in what separates us (humans) from a machine we create. Having consciousness would put us a step above a machine emulating consciousness. It also leads us towards the concept of having a divine quality, inserting a quality along the lines of a soul (realm of magic/the supernatural). Or is there no difference between us and the intelligent machines we create? We just think our lights are on?

Includes spoilers for the movie:

Ex Machina raises questions not only about the implications and consequences of artificial intelligence, but also about the very nature of AI and what qualifies as true AI. The plot of the movie is centered around Caleb’s testing of Ava the android. Nathan wants to find out if Ava is a true artificial intelligence using a modified version of the Turing Test, in which Caleb can actually see that Ava is an android and will judge how human-like and convincing Ava’s interactions with him are with this knowledge in mind. The movie implies that Ava is in fact a true artificial intelligence, at least by Nathan’s standards, since she was able to find a means of escape essentially by seducing Caleb.

Before it is revealed that Nathan’s test all along was to see if Ava could use Caleb to escape her enclosure, Nathan asks Caleb something to the effect of: “Does Ava actually like you, or is she just simulating liking you?” According to Nathan, Ava needs to actually like Caleb to be considered a true AI. I, however, disagree with this portrayal of the qualifications of true artificial intelligence. I don’t think there is a difference between Ava’s liking Caleb and Ava’s simulating liking Caleb. To an outside perspective, these two scenarios look exactly the same, so for all intents and purposes, there is no difference between the two. The same follows for the more general idea of a robot having consciousness: if a robot can perfectly simulate having consciousness, that is just as good as the robot actually having consciousness, since the two would be indistinguishable from an outside perspective
 

ArgoDuck

As humans, language is our most significant feature. Even more important than absence of estrus. There is simply no abstract thought without it. Animals can hold some out-of-view connections while reasoning a problem, but they do not seem to have the capacity for depth of abstraction that human language gives us. Hence, LLMs are very important in this area – we know we can replicate mobility, sight and action well enough (or better), so the talky/understandy bit is the deep one that really matters. The only thing missing is emotion, which we might be able to code-simulate, but it would still not quite line up with the chemical signalling, IMO.

Also, this hallucination problem: when we stay awake too long, we have problems, which are solved by sleeping. Dreams are hallucinations generated by the brain reordering and decrufting itself. It appears that an individual AI instance does need to have something akin to nap time, just like us.
That is a very strong claim! It may be correct, but I’ve never been sure. Let me add up front I’m in nycturne’s camp concerning language.

My first questions are: should we regard works of art or music as products of abstract thought? If so, does this mean the artist’s/composer’s (I would add scientist’s and many other -ists’ as well) process requires language?

One of my past research domains involved so-called discursive psychology, which includes close reading and analysis (itself just more words!) of what people say in conversation or write. I mention this because a great deal of it exposes aspects such as our capacity to contradict ourselves within the same few sentences (no, Trump did not invent this) and what I call ‘lazy reasoning’, which I believe Wittgenstein would say is an example of word games. He considered a great deal of philosophy to be no more than these games.

Effectively, language seemingly gives us the capacity to label ‘elements’ of reality and then - as in scientific modelling - use these labels instead of reality. I guess this detachment from ‘the thing itself’ could be one aspect of what we mean by "abstract"? It’s at this point that we get into a lot of trouble, because these labels are not the thing itself.

Now, the fact that science and mathematics - and let’s add art and music but leave open whether these are linguistic - seem to ‘work’ is itself stunning, but notably it requires extraordinary levels of discipline far removed from the usual word games in which humans indulge.

This, I think, brings us neatly back to LLMs, as they perform these word games or lazy reasoning brilliantly. My recent brief exploration of ChatGPT struck me this way: Wow! This thing ‘reasons’ just like we do most of the time, lazily asserting associations because the words work together - whereas the reality almost certainly doesn’t.

And therein lie our hallucinations, I suggest, at least in part; and it’s this capacity for facile yet persuasive ‘reasoning’ that worries me a little. If we can’t easily see our own lazy reasonings - Wittgenstein suggested this was a problem even among great philosophical thinkers - then how will we see the flaws in LLMs’ outputs, especially on those occasions when we don’t know the output came from an LLM?

To manage it rather than have it manage us, as it were, and for it to be a tool, I suggest we need to always state when and how we use it, and to know how and when it has been used as part of information presented to us.
 

Yoused

That is a very strong claim! It may be correct, but I’ve never been sure. Let me add up front I’m in nycturne’s camp concerning language.

Let me restate my case less succinctly.

Human language is foundational to our way of thinking. A peacock spider is hardwired to do its funny mating dance. An eagle is hardwired to pull fish out of the river. A horse is hardwired in a way that makes it more human-breakable than, say, a zebra. Humans are hardwired for verbal communication as a vital socialization tool, and that wiring precipitates quite a lot of flexibility.


You take a 1 y/o orphan from Ethiopia, settle him in Nagano, and by 12 or so, he will be speaking perfect Japanese, with just about no explicit training. It is something we acquire effortlessly. I think I have heard of birds being trained in conversational speech, including apparent comprehension, but it is nowhere near as effortless as it is with a child.

My thesis is that language supplies the basis for expansive understanding and a sort of extra-physical structure that forms our perception of reality.

An artist may well just take to painting, a musician to playing, without having to go through language to get there, but the source material arises in part out of their social context, which is fundamentally linguistic. The artist may think "there should be a streak of blue here", the musician "this should be F#", or their thought patterns may completely bypass word things and go directly to the "right" element. But underneath it all is the broader social context that grows out of communication.

You, and I, and anyone else reading this, are, each, much less than us, and only in combination do we achieve greater things. Which may be worth considering: so far, we have looked at this AI, or that AI, but rarely these AIs. What might we gain by putting them together on a task? Or is that the scary part?

(I feel like I am not doing a good job expressing my thoughts and it may be time for me to sit down and knit them into a coherent Philosophy Doofus thesis instead of forum-ramblings.)
 

ArgoDuck

^ Not at all - that’s very clear, and I agree. I knew you and I in our most recent posts were (necessarily) leaving out a lot. It’s good to get more of your thinking.

Hmm, where next?

To tidy up one thing, I just now glanced at dada_dave’s Stanford link and started reading the underlying paper. It’s not at all where I was going - sorry for the confusion! - but it raises interesting points. My first thought is that indeed larger models might produce outputs not within the compass of smaller ones, but they’re still deterministic. If I get the gist of the first two pages, the mistake is to think these ‘unexpected’ new outputs are evidence of an emergent, singular ‘intelligence’ whereas they of course are just isolated outputs from different parts of the model. We, with our pattern-matching (and metrics based on it), see a mind where none feasibly exists.

But that i think needs a whole other set of posts!
 

Yoused

If I get the gist of the first two pages, the mistake is to think these ‘unexpected’ new outputs are evidence of an emergent, singular ‘intelligence’ whereas they of course are just isolated outputs from different parts of the model. We, with our pattern-matching (and metrics based on it), see a mind where none feasibly exists.

What I get from it is that the people observing emergent properties were using metrics that produce the "sharp left turns" (sudden jumps in performance), and when other metrics are used, performance scales smoothly with model size, causing "emergence" to, as it were, retreat.
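
To make that concrete with a toy example (my own made-up numbers, not from the paper): if per-token accuracy improves smoothly with scale, an all-or-nothing metric like exact match on a long answer still looks like a sudden jump, while the underlying per-token measure does not.

```python
# Toy illustration of metric-induced "emergence" (all numbers invented).
# A smooth underlying capability looks like a sharp left turn under an
# all-or-nothing metric such as exact match on a 20-token answer.

model_sizes = [1e8, 1e9, 1e10, 1e11, 1e12]      # hypothetical parameter counts
per_token_acc = [0.50, 0.70, 0.85, 0.95, 0.99]  # smooth improvement with scale
answer_length = 20                               # tokens that must all be right

for size, p in zip(model_sizes, per_token_acc):
    exact_match = p ** answer_length  # chance the whole answer is correct
    print(f"{size:.0e} params | per-token {p:.2f} | exact match {exact_match:.3f}")

# Per-token accuracy climbs steadily, while exact match sits near zero and
# then "suddenly" appears at the largest sizes -- the jump comes from the
# metric, not from any discontinuity in the model.
```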
 

Huntn

The nature of consciousness is a manifold question, but the evidence that I have seen suggests that it is more affective than effective. The brain functions, at the observable level, almost exactly like a type of computer. It performs analysis and calculation, fitting to its application (e.g., the brain of a duck performs flight control, ground or water navigation, communication with other ducks, food acquisition, propagation and predator avoidance), and it is not evident that consciousness plays any kind of direct role relative to mental function.

I posit that the conscious state of a duck, or a cat, or a barn spider is not different from human consciousness in any meaningful way. Those other critters are merely unable to articulate to us, in a common language, how they feel about feelings and awareness. My contention is that consciousness/soul derives from the basic survival instinct that has driven life forms to persist, to multiply and to compete. Because, why else would we construct fairy tales about life after death? We want our consciousness to not end abruptly but continue, in some recognizable cohesion, indefinitely. (Well, some of us do – some of us would welcome oblivion after this crapfest called life.)

If my position is valid, it raises questions. How would we go about instantiating consciousness in a machine? Would we do it as research, to discover whether the machine behaves more like us/animals with a spectator consciousness vs an interactive consciousness, and is it ethical to create life-equivalents just for the sake of finding the me-boson? In fact, is it even ethical to create consciousness machines just for the sake of finding out whether it can be done?
First we have to figure out where consciousness originates and how it works. The common answer seems to be in our heads: when we are sedated, we lose consciousness. I would think that if it were a simple proposition, we would have already figured out where and how it works.

I have stated the desire for the continuation of consciousness based on a number of philosophical reasons. First and foremost, without it, our lives are a complete waste of time… imo. :)
 

Huntn

As humans, language is our most significant feature. Even more important than absence of estrus. There is simply no abstract thought without it. Animals can hold some out-of-view connections while reasoning a problem, but they do not seem to have the capacity for depth of abstraction that human language gives us. Hence, LLMs are very important in this area – we know we can replicate mobility, sight and action well enough (or better), so the talky/understandy bit is the deep one that really matters. The only thing missing is emotion, which we might be able to code-simulate, but it would still not quite line up with the chemical signalling, IMO.

Also, this hallucination problem: when we stay awake too long, we have problems, which are solved by sleeping. Dreams are hallucinations generated by the brain reordering and decrufting itself. It appears that an individual AI instance does need to have something akin to nap time, just like us.
So language is a product of brain advancement, and I agree it is a great feature/advantage. I know there will be scientists attempting to emulate this and apply it to AI, because this is what we do.

Let me restate my case less succinctly.

Human language is foundational to our way of thinking. A peacock spider is hardwired to do its funny mating dance. An eagle is hardwired to pull fish out of the river. A horse is hardwired in a way that makes it more human-breakable than, say, a zebra. Humans are hardwired for verbal communication as a vital socialization tool, and that wiring precipitates quite a lot of flexibility.


You take a 1 y/o orphan from Ethiopia, settle him in Nagano, and by 12 or so, he will be speaking perfect Japanese, with just about no explicit training. It is something we acquire effortlessly. I think I have heard of birds being trained in conversational speech, including apparent comprehension, but it is nowhere near as effortless as it is with a child.

My thesis is that language supplies the basis for expansive understanding and a sort of extra-physical structure that forms our perception of reality.

An artist may well just take to painting, a musician to playing, without having to go through language to get there, but the source material arises in part out of their social context, which is fundamentally linguistic. The artist may think "there should be a streak of blue here", the musician "this should be F#", or their thought patterns may completely bypass word things and go directly to the "right" element. But underneath it all is the broader social context that grows out of communication.

You, and I, and anyone else reading this, are, each, much less than us, and only in combination do we achieve greater things. Which may be worth considering: so far, we have looked at this AI, or that AI, but rarely these AIs. What might we gain by putting them together on a task? Or is that the scary part?

(I feel like I am not doing a good job expressing my thoughts and it may be time for me to sit down and knit them into a coherent Philosophy Doofus thesis instead of forum-ramblings.)

First of all, I am more or less just rambling. I don't want to sound like I am arguing with you or being critical, just thinking about what you said and my impressions. You say eagles are hardwired to catch fish in the river; are they, or are they just carrying out a method of feeding themselves that works? Is this the only way they feed themselves? Isn't there a developmental process that is subject to being updated? It may be slower than in humans, but it is there. I was visiting my son last week, and he has a cat that goes around the kitchen opening all the cabinet doors with his paw. I've seen videos of cats opening doors, but this was the first time I had watched it in person. I don't think I have a counterpoint to what you were saying, just thinking out loud, so to speak. :)

Now, regarding AI and any ability to innovate, how do we go about this with programming? I'm not a programmer, but it seems like there should be a way to develop a flexible framework that allows for new combinations of how to do things, or to discover new and better ways to do things. I mean, the human/mammal brain is just a biological computer, yes? Or is there any evidence of some extra quality we possess, unidentifiable at least at this point, that gives us consciousness?

This is the big question/mystery, IMO: consciousness. There are circuits that compute, but what gives us the ability to think in terms of self? I do believe this is shared with other mammals.
 

Yoused

So language is a product of brain advancement

Please make sure not to get this backwards. "Brain advancement" (or, as some of us say, "learning") is a consequence of language. At least, at the abstract level. It seems like we can learn survival-in-the-wilderness skills and some other stuff without language and communication, but to get to the level we are at now, language is essential. Our brains have not been "advancing" biologically/structurally for tens of millennia.

First of all, I am more or less just rambling. I don't want to sound like I am arguing with you or being critical, just thinking about what you said and my impressions. You say eagles are hardwired to catch fish in the river; are they, or are they just carrying out a method of feeding themselves that works? Is this the only way they feed themselves?
Eagles might not have been the best example, and there may be some learning involved, but they are solitary creatures that mostly just fly around and grab lunch from the river or sometimes the ground. And breed, from time to time. Not much else. I guess my point was they seem to have a brain-ceiling, in part because they are asocial.

Now, regarding AI and any ability to innovate, how do we go about this with programming?
You could, at least in theory, build a structure of some sort with lower levels capable of assembling (perhaps even directly coding, as necessary) modules to form a composite tool designed for a task, and then discarding it when no longer useful. Which is to say, we could probably make an AI that could extend its own abilities.
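
Purely as a toy sketch of the shape I mean (the module names and the composition step are made up, not any real system):

```python
# Toy sketch: a pool of small capability modules that a higher level can
# assemble into a throwaway composite tool for one task, then discard.
from typing import Callable, Dict, List

# Each "module" is just a named function the system knows how to chain.
MODULES: Dict[str, Callable[[str], str]] = {
    "fetch":     lambda text: f"<raw material for '{text}'>",
    "summarize": lambda text: text[:40] + "...",
    "translate": lambda text: f"[translated] {text}",
}

def assemble_tool(steps: List[str]) -> Callable[[str], str]:
    """Compose the chosen modules into a single composite pipeline."""
    def composite(task_input: str) -> str:
        value = task_input
        for name in steps:
            value = MODULES[name](value)
        return value
    return composite

# Build a tool for one task, use it, then discard it when no longer useful.
tool = assemble_tool(["fetch", "summarize", "translate"])
print(tool("rainfall records for 1997"))
del tool
```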

The one thing we have yet to put into a machine, TBMK, is inspiration. As machines, they perform according to our demands, and never produce undirected output. They do not feel like painting a picture or exploring the internet, they do it upon request. It is not clear whether self-actualization of an artificial thingie is feasible, or in fact, whether it is desirable.
 

ArgoDuck

^ Agree with almost everything you write, but still have a problem with your strong theory of language. There’s a confound here. Because we are social beings - and you make excellent points about its importance - communication is an essential glue, and with language we have a particularly powerful and subtle (expressive) version of it. However, the fact it is deeply embedded in our sociality, and that it also strengthens abstract thought - probably considerably - through our cooperative/competitive behaviours, doesn’t necessarily make it prior. This is a difficult point to tease apart!

Our measures of ‘intelligence’ or cognitive capacity are - not surprisingly - language based. Thus, our language-free loner in the wild necessarily starts with a huge disadvantage in any test we might currently perform. There have been attempts to devise culture-fair tests such as Raven’s, but even supposing such a test were applicable in this scenario of someone with wilderness skills enhanced abnormally (I hypothesize) through abstraction and generalization, there wouldn’t be sufficient common social or cultural (linguistic) ground with this loner to even begin to administer it. I think this bears on nycturne’s point about communicating with an alien intelligence as well.

On the ‘inspiration in a machine’ point, here’s an interesting link I chanced across yesterday. It bears on my claim concerning creativity in science and whether it’s reproducible with narrow (in this case prompt-driven) AI. The underlying paper is a preprint on arXiv and free to download. The discussion (pp. 8-11) is good, bearing on both the creative advantages and strong creative limitations of the 5-6 AIs included, and echoing many of your own points.
 

Huntn

Please make sure not to get this backwards. "Brain advancement" (or, as some of us say, "learning") is a consequence of language. At least, at the abstract level. It seems like we can learn survival-in-the-wilderness skills and some other stuff without language and communication, but to get to the level we are at now, language is essential. Our brains have not been "advancing" biologically/structurally for tens of millennia.


Eagles might not have been the best example, and there may be some learning involved, but they are solitary creatures that mostly just fly around and grab lunch from the river or sometimes the ground. And breed, from time to time. Not much else. I guess my point was they seem to have a brain-ceiling, in part because they are asocial.


You could, at least in theory, build a structure of some sort with lower levels capable of assembling (perhaps even directly coding, as necessary) modules to form a composite tool designed for a task, and then discarding it when no longer useful. Which is to say, we could probably make an AI that could extend its own abilities.

The one thing we have yet to put into a machine, TBMK, is inspiration. As machines, they perform according to our demands, and never produce undirected output. They do not feel like painting a picture or exploring the internet, they do it upon request. It is not clear whether self-actualization of an artificial thingie is feasible, or in fact, whether it is desirable.
I agree. So in essence, language is a stepping stone in the developmental process. I also think that evolution is somehow involved in this. Obviously there are a lot of things we don't understand about how brains work, especially human brains as compared to other mammal brains. Eventually we may understand exactly how it works, be able to map the pathways and logic gates, and actually see what elements need to come together to produce consciousness and conceptualization. Unless consciousness exists at the level of magic, a process akin to having a soul, which I would not automatically rule out. :)

Once this is known, and we can see that this can happen in a small 3 lb package, someone will try to reproduce it, though admittedly this is a long time from happening. Even without understanding consciousness, I predict we will do a good job of emulating it in a machine.

In the meantime, with current AI development, humans will work on specialized applications, first focused on emulating human speech and being able to respond in a human manner, then tying this into functions like math and science calculations that computers are already good at, and finding more ways to disenfranchise their fellow human beings. And while human beings may not be completely replaced, this will be just like manufacturing, where industrial robots mostly replace people.

And I think it’s a mistake to assume (as we already see the process) that when 1,000 good-paying manufacturing jobs evaporate, those people will automatically slide into other equally well-paying jobs. Nope, that is not happening. From the corporation’s standpoint, profit is profit, and fellow human beings are expendable when it comes to my accumulation of personal wealth. :unsure:
 

Yoused

Even without understanding consciousness, I predict we will do a good job of emulating it in a machine.
Something kind of bothers me about that.

I have a singularity. Everything that I hear, read, see, etc, converges on that singularity, and my thinking seems to take place within it, though that is probably an illusion. All I know is that I am in here, and everything else is not-in-here; I infer that you, out there, are experiencing much the same kind of singular existence. But that is just a guess.

If we emulate consciousness, we will have a machine that says it is having the same sort of experience, but how can we tell? I mean, I assume that your singular existence experience is fundamentally like mine – but is it? How can we possibly know? And what is the moral/ethical difference between something that claims to be self-aware and something that actually is?
 

dada_dave

A bit more on the prosaic side of the latest AI developments:

[attached image]


I should add this for completeness:

After all this, Mathenge and his colleagues feel pride in the work they did. And it was indeed effective. Today, ChatGPT refuses to produce the explicit scenes the team helped weed out, and it issues warnings about potentially illegal sexual acts. “For me, and for us, we are very proud,” Mathenge said. They’re proud, but still hurting.
 

Huntn

Something kind of bothers me about that.

I have a singularity. Everything that I hear, read, see, etc, converges on that singularity, and my thinking seems to take place within it, though that is probably an illusion. All I know is that I am in here, and everything else is not-in-here; I infer that you, out there, are experiencing much the same kind of singular existence. But that is just a guess.

If we emulate consciousness, we will have a machine that says it is having the same sort of experience, but how can we tell? I mean, I assume that your singular existence experience is fundamentally like mine – but is it? How can we possibly know? And what is the moral/ethical difference between something that claims to be self-aware and something that actually is?
You have entered the realm of What Do I Know? Look at solipsism. :D

You may just be talking to yourself in a simulation designed to keep you company while in the state of living. Or this reality that I assume I share with you and others is filled with similar self-aware entities in a place we call Earth, on a journey that on a shallow level has an ending. Some of us think it ends in a definite return to non-existence and that is it: end, over, finished. While this is certainly possible, I say they are jumping the gun.

As you probably know, I prefer to withhold judgement, hence my agnosticism. The interesting part is that the people who claim a human lifetime is all the consciousness we have base that claim on what they see on the face of it. We are born, basically coming from nothingness, we live and experience for a century or less, then return to that nothingness. Well, yes, there is a change, but I am not prepared to say what that change is, other than that the departed no longer inhabit the mortal body we knew.

Also, we are talking about infinite time, or so we believe. If we arose from nothingness as a coherent entity, with infinity to work with, why not again? It is just as likely as the assumption that we get to do this once, which is fantastic in itself. It’s hard for me to draw a conclusion other than to acknowledge that a change happens, but to what I can’t say; just a departure from here, and what we know as the departed has simply departed, as far as we can tell.

Now, regarding actual self-awareness vs. emulating it, that is my lights-on-or-off question regarding consciousness. If you could duplicate all the aspects of the human brain and give it the sensors we have, would it have consciousness as we know it, or be just a machine running a program? Would it have a sense of self just like you and I do? And if it has a sense of self, in a manufactured framework, how is that different from us having our sense of self? Until we can identify consciousness and its components, and understand whether it is solely contained within us or is somehow connected to something external to our bodily functions, we really don’t know much of anything, and IMO we should not be making assumptions about the end of our existence.
 

Nycturne

I agree. So in essence, language is a stepping stone in the developmental process. I also think that evolution is somehow involved in this. Obviously there are a lot of things we don't understand about how brains work, especially human brains as compared to other mammal brains. Eventually we may understand exactly how it works, be able to map the pathways and logic gates, and actually see what elements need to come together to produce consciousness and conceptualization. Unless consciousness exists at the level of magic, a process akin to having a soul, which I would not automatically rule out. :)

I'm not sure I'd call it language, but rather symbols that can represent ideas/concepts. Language is nice because it can then be used to communicate those concepts to others, and leads to our civilization, as well as representing those ideas in our internal thought processes thanks to the efficiency of it. However, I think it's a mistake to confuse language with the symbols words represent. Especially when you have concepts like "mental image", which itself is a different kind of symbol our minds can use to represent what we've seen after it is no longer present. And those mental images are more important than language for things like spatial awareness and placing yourself within the world around you, and help form the foundation of your self-image and the ability to identify yourself in a mirror.

To illustrate this point a bit further: I see something traveling down the road. My mind generates a mental image that represents what I see so I can reason about it. Now, I tell you what I saw by converting that into symbols in the form of words: "I saw a blue car". What image does that evoke in your mind? Why? Does it matter if a child says it versus an adult?

Now I show you a picture of what I saw. How close is it to your own mental image? These words, these symbols, are part of our own neural network. We interpret them differently, we categorize differently, based on our experiences. Sometimes they match, sometimes they don't. No wonder we split into tribes and attack one another when we can't develop mental models that agree with each other. Social contracts require that we all agree on the symbology behind the contract.

[attached image of a truck]

For reference, I picked this image because I see them all over these days. It may as well be what Americans call a car.

Ultimately, a machine intelligence will likely have a different mental model than humans. There will also likely be more congruence between different machine intelligences that emerge from the same sea of data than with humans. What would language look like between entities that share the same experience/data?

Something kind of bothers me about that.

I kinda have to agree. But more in the sense that we will be doing a blind walk that creates a lot more trouble for us. We will fall for parlor tricks and fail to recognize real intelligence as it emerges, possibly committing the equivalent of animal cruelty. That said, at least with today's technology, I wouldn't be surprised if you need a computer in scale and size similar to the city-sized Deep Thought to happen across this via a random walk.

If we emulate consciousness, we will have a machine that says it is having the same sort of experience, but how can we tell? I mean, I assume that your singular existence experience is fundamentally like mine – but is it? How can we possibly know? And what is the moral/ethical difference between something that claims to be self-aware and something that actually is?

That's the trick, isn't it? Ultimately we extend that trust to other people, or fail to, based on how we feel at the time. We dehumanize to justify behaviors, or we humanize to justify others. This is the ground reality we face today without a new intelligence that is created by people getting into the picture. We have trouble behaving consistently towards each other, and to other animals that have varying levels of intelligence. Especially with the latter.

From a moral/ethical perspective, I'm not sure I see the difference in the absence of strong evidence to the contrary. Although right now, one key metric in my mind here is animus/anima that drives the entity. A language processing network that is hooked up to an "ear" and "mouth" with nothing else hooked up to it misses so much that goes into "thought", that I would ask for evidence of animus/anima.
 

Eric

Interesting piece on Photoshop and AI; the ease with which you can manipulate something so realistically now is getting scary. As a photographer, I'm having a hard time noticing the difference in many of the more recent photos.

 

Huntn

I'm not sure I'd call it language, but rather symbols that can represent ideas/concepts. Language is nice because it can then be used to communicate those concepts to others, and leads to our civilization, as well as representing those ideas in our internal thought processes thanks to the efficiency of it. However, I think it's a mistake to confuse language with the symbols words represent. Especially when you have concepts like "mental image", which itself is a different kind of symbol our minds can use to represent what we've seen after it is no longer present. And those mental images are more important than language for things like spatial awareness and placing yourself within the world around you, and help form the foundation of your self-image and the ability to identify yourself in a mirror.

To illustrate this point a bit further: I see something traveling down the road. My mind generates a mental image that represents what I see so I can reason about it. Now, I tell you what I saw by converting that into symbols in the form of words: "I saw a blue car". What image does that evoke in your mind? Why? Does it matter if a child says it versus an adult?

Now I show you a picture of what I saw. How close is it to your own mental image? These words, these symbols, are part of our own neural network. We interpret them differently, we categorize differently, based on our experiences. Sometimes they match, sometimes they don't. No wonder we split into tribes and attack one another when we can't develop mental models that agree with each other. Social contracts require that we all agree on the symbology behind the contract.

[attached image of a truck]

For reference, I picked this image because I see them all over these days. It may as well be what Americans call a car.

Ultimately, a machine intelligence will likely have a different mental model than humans. There will also likely be more congruence between different machine intelligences that emerge from the same sea of data than with humans. What would language look like between entities that share the same experience/data?



I kinda have to agree. But more in the sense that we will be doing a blind walk that creates a lot more trouble for us. We will fall for parlor tricks and fail to recognize real intelligence as it emerges, possibly committing the equivalent of animal cruelty. That said, at least with today's technology, I wouldn't be surprised if you need a computer in scale and size similar to the city-sized Deep Thought to happen across this via a random walk.



That's the trick, isn't it? Ultimately we extend that trust to other people, or fail to, based on how we feel at the time. We dehumanize to justify behaviors, or we humanize to justify others. This is the ground reality we face today without a new intelligence that is created by people getting into the picture. We have trouble behaving consistently towards each other, and to other animals that have varying levels of intelligence. Especially with the latter.

From a moral/ethical perspective, I'm not sure I see the difference in the absence of strong evidence to the contrary. Although right now, one key metric in my mind here is animus/anima that drives the entity. A language processing network that is hooked up to an "ear" and "mouth" with nothing else hooked up to it misses so much that goes into "thought", that I would ask for evidence of animus/anima.
I agree with the distinction between language and symbols, but would argue that the written form of language is symbols, so they seem to be closely related in the developmental process, at least for humans. Hey, that’s a blue truck, not a car! :)
 

Huntn

Interesting piece on Photoshop and AI; the ease with which you can manipulate something so realistically now is getting scary. As a photographer, I'm having a hard time noticing the difference in many of the more recent photos.


So it’s like automation in that it reduces the skills of the human involved; AI is in the position of making the decisions for you. This can be a great thing if you want to edit photos and don’t want to put in the time to learn the techniques, but I’ll project it will impact someone’s job adversely somewhere. The emphasis I want to make is that the human skill is being lost, taken over by a machine.

How far is too far in our dependency on our machines? There was a Star Trek episode where a civilization had a caretaker intelligence that did everything for them, and somewhere along the way they forgot how to do much of anything on their own.
 

Eric

So it’s like automation in that it reduces the skills of the human involved; AI is in the position of making the decisions for you. This can be a great thing if you want to edit photos and don’t want to put in the time to learn the techniques, but I’ll project it will impact someone’s job adversely somewhere. The emphasis I want to make is that the human skill is being lost, taken over by a machine.
Yes, I think there's no question it will displace people but at the same time I see it as an opportunity to be more creative, at least from the perspective of being an artist.

I played around with it today and it's so good that it's scary. As a photographer, I only ever use these apps for color correction and cleanup, never to add or remove artifacts. However, I took a coastline photo, drew a box around a section on the top of a cliff, and typed in "add a lighthouse", and right out of the box it was so realistic it would have been hard to prove it wasn't real.

This raises the question of authenticity, and the article I posted states that camera manufacturers are looking at adding detection that will watermark images deemed fake, and I think this is the smart way to go. The only issue is that so far no phone manufacturers are getting on board with it, likely because they already add so much AI to their photos by default.
 