The AI thread

Using ChatGPT or any of these other AI engines as an example, there is a learning curve to really understanding how to use them properly. I would think writers, or those already proficient with the tools, are best suited for that type of role; to me this is where you find the gap and fill it. It's definitely not easy, and it's a change we have to adapt to, but it can be done if one has the initiative.

I wouldn't be surprised if Stanford Continuing Studies offers night classes on that within the next quarter or two.

EDIT:

I spoke too soon:

 
I think the bigger issue is AI creating a narrative, possibly padded with deep fakes, and then the viewer(s) decide to load up their arsenal and go do something about it.

I haven’t toyed with any of the AI tech out there, but I imagine it wouldn’t be too difficult to do something like “Here’s my worldview. Now only provide me with data that backs that up.” and then send that out into the world. Humans are already doing it. AI could do it better/worse.
 
Putting aside the notion of a dystopian future, we're going to see more of what's in this article, just with extraordinarily realistic bots scamming lonely folks. When somebody doesn't *want* to know that they are chatting with AI, because they are emotionally invested, it becomes a substantial problem. Nobody likes to admit that they're being hoodwinked, particularly when it's at such a personal level.


Scammers won't need to hire humans when they can cheaply and efficiently have AI doing the catfishing on a massive scale. No amount of legislation is going to solve a problem that is already illegal.
 
The problem is us: human beings, capitalism, and human greed. My position is that capitalism without strict regulation and caps on wealth, and without a view toward civilization as a whole, will not be able to carry us into the future. Too much ME and not enough WE in our economic calculations.

See, I just wrote your alt-topia story: Nine of the largest investment banks acquire huge AI programs, train them on market dynamics, turn their trading operations over to the machines, and go yachting. As the banks gain higher and higher profits, some AI-savvy hacker figures out how to mess with the models by providing the machines with corrupted data, causing the machines to learn that their behavior patterns will lead to a market crash, so they adjust their trades to yield a proper balance of gains and losses.

The bankers return from vacation to find their profits have actually been steadily diminishing, the market is more robust than ever, and they cannot figure out how to pull the plug because no one knows how to handle trades anymore.
 
I think the trick is that if your ML model does analysis on structured data, you still need controllers. Much like Excel drastically changed accounting offices toward fewer folks with higher skillsets, ML-based financial analysis does the same: the folks running it still need to understand how the data works to build and run the analysis you want.

Hooking up an LLM to replace that white-collar job adds complexity, as now you no longer have someone who:

A) Asks clarifying questions when they don't 100% understand the analysis you want to run.
B) Can be imaginative and think ahead of future questions and generate value that way.
C) Can take responsibility for fixing things when they go wrong, or be a scapegoat if that's how you run things in your particular business.

In other areas such as legal matters, I wonder how long before someone screws up a contract or other legal text by not catching a hallucination in GPT's output or the like.
Heh, not saying there wouldn't be a learning period where you have the carbon-based life forms still doing their work in parallel... the roles are phased over to AI once you get up around the 99.9% success factor (if you're being precise) - not that humans are (usually) that accurate. :)

You'll still need your AI engineers who understand all of the models - compliance legislation changes all of the time, so you'll have to keep tweaking things - but it's a handful of engineers vs hundreds/thousands of employees. You'll be able to pay the engineers very well indeed. :)
 
Heh, not saying there wouldn't be a learning period where you have the carbon-based life forms still doing their work in parallel... the roles are phased over to AI once you get up around the 99.9% success factor (if you're being precise) - not that humans are (usually) that accurate. :)

When I was working with ML a couple of years ago (nothing on this scale), we were discussing things in terms of precision vs recall. There are likely novel ways to improve both that we can still pursue, but generally you need to make a tradeoff between the two. And with the sort of low-stakes stuff we were dealing with, 70% on either was considered relatively good, but getting better meant the other metric tanked hard. One nine is considered impressive at this point; three nines seem out of reach for the time being.
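
To make the trade-off concrete, here is a rough sketch with made-up synthetic scores (nothing from a real project): as you raise the decision threshold, precision climbs while recall falls away, which is roughly the dynamic described above.

```python
# Toy illustration of the precision/recall trade-off on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: positives tend to score higher, but the distributions overlap.
labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([rng.normal(0.65, 0.15, 500),   # positives
                         rng.normal(0.45, 0.15, 500)])  # negatives

for threshold in (0.3, 0.5, 0.7, 0.9):
    predicted_positive = scores >= threshold
    true_positives = np.sum(predicted_positive & (labels == 1))
    precision = true_positives / max(np.sum(predicted_positive), 1)
    recall = true_positives / np.sum(labels == 1)
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")
```

Raising the threshold pushes precision toward 1.0 while recall collapses; getting both into the high nineties at once is a different problem entirely.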

That said, while humans are inaccurate, they are more likely to do things like double-check their work and identify the error. Or even just check their assumptions.

My concern with AI is less about getting high precision/recall and more about being able to trace the work properly so it can be double-checked in the first place. Transparency is required to make AI really work well without placing faith in a black box.

You'll still need your AI engineers who understand all of the models - compliance legislation changes all of the time, so you'll have to keep tweaking things - but it's a handful of engineers vs hundreds/thousands of employees. You'll be able to pay the engineers very well indeed. :)

Which feeds even more into the rising inequality. Not sure that's a benefit here.

Using ChatGPT or any of these other AI engines as an example, there is a learning curve to really understanding how to use them properly. I would think writers, or those already proficient with the tools, are best suited for that type of role; to me this is where you find the gap and fill it. It's definitely not easy, and it's a change we have to adapt to, but it can be done if one has the initiative.

My thing is more that using AI for generative content cost reductions means that person A isn't going to switch roles, but rather person B, whom I already pay, gets more/different responsibilities so that A can be cut completely. It's different from IT services going from on-prem to cloud, where reduced demand in one area is matched with increased demand in another. To use the audiobook example, this is already pared down to a two-person job: the one reading the book, and the production person doing the recording work, adjustments, stitching, etc. Using ML-based TTS means that I just have the production person. I don't need a separate "AI whisperer". Same with ML-generated book covers/etc. The one in charge of massaging the model inputs is already on my staff. So I'm not convinced that there are going to be as many new roles produced by this compared to something like low-code/no-code platforms (which themselves are already under threat from the idea of AI-generated low-code).

I'm not trying to make a claim about the particular shape this disruption will take, but rather to point out that "past performance is not an indicator of future results". And there are signs that past performance might not apply in the way we think here, especially with the current socioeconomic climate.

There is a learning curve, but I'm noticing some smart folks are having difficulty integrating with LLMs because of the sheer statistical nature of it all. Say I want to use GPT to invoke stock trades based on inputs I give it, and I have plugins that query stocks and make trades. Right now one of the problems engineers are having is getting the model to consistently invoke the "plugins" that would do the interaction. If the model decides not to invoke the plugin to query a stock and instead generates the data it was supposed to query... well, good luck to you trying to build something on top of that.
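
A sketch of the kind of guard rail this pushes you toward (the function names and JSON shape here are hypothetical placeholders, not any particular plugin API): only act on an explicit, well-formed tool call, and refuse anything that looks like the model inventing the data itself.

```python
# Hypothetical guard around LLM "plugin" calls: act only on structured tool
# calls backed by real data sources, never on free-text output from the model.
import json

def query_stock_price(symbol: str) -> float:
    """Stub for a real market-data plugin."""
    return 123.45

def place_trade(symbol: str, quantity: int) -> dict:
    """Stub for a real brokerage plugin."""
    return {"status": "ok", "symbol": symbol, "quantity": quantity}

def handle_model_output(raw_output: str) -> dict:
    # Expect a JSON tool call; prose (possibly fabricated quotes) is rejected.
    try:
        call = json.loads(raw_output)
    except json.JSONDecodeError:
        return {"status": "rejected", "reason": "model answered in prose, not a tool call"}
    if not isinstance(call, dict):
        return {"status": "rejected", "reason": "tool call is not an object"}

    if call.get("tool") == "query_stock" and "symbol" in call:
        return {"status": "ok", "price": query_stock_price(call["symbol"])}
    if call.get("tool") == "place_trade" and {"symbol", "quantity"} <= call.keys():
        return place_trade(call["symbol"], int(call["quantity"]))
    return {"status": "rejected", "reason": "unknown or malformed tool call"}

# The failure mode described above: the model "helpfully" invents the data itself.
print(handle_model_output("AAPL is trading at $182.50 today."))
print(handle_model_output('{"tool": "query_stock", "symbol": "AAPL"}'))
```

The hard part is that you can't force the model to take the structured path; all you can do downstream is refuse to act when it doesn't.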
 
Just to go on a slightly different tack here, and with one other thought thrown in at the end:

Some years ago I was amazed to read a proposal to ‘automate science’. I forget the details, except that it did envisage replacement or reduction of scientists’ numbers as an advantage.

Science can be characterized as one of those ‘uniquely human’ and ‘creative’ activities. Part of the creativity arises from chance: even specialists in the same topic read the same papers but in a different order, and so different ideas emerge, one of which later proves fruitful. The more interesting path is where human personality comes into play: different scientists through different cognitive biases/preferences/whatever simply ’see’ different things in the same papers. The more unusual a particular person’s way of seeing the world, the more likely a truly original hypothesis.

Now, returning to the idea of automating science, it’s possible to see that all we need is a sufficiently powerful machine to take all available papers as input - note we are close to this with LLMs now, if not already there - then generate all possible hypotheses. Such a machine could also be constructed to determine which hypotheses are currently testable, and generate the designs, methods, and even send out blueprints for construction of the instruments and subsequently initiate collection of data.

Note that such a machine instantly removes the ‘human advantage’ from science, whether it be an unusual way of looking at the world or simple chance factors. The machine doesn’t need ‘intelligence’ or ‘sentience’ as we usually mean these things. It only needs to be sufficiently powerful. I think we’re pretty close to realizing this scenario.

Suppose this example generalizes to (most? All?) other things we do to ‘earn a living’ in our current economic structures: where does this leave us?

Machines don’t benefit, as the results are meaningless to ‘them’. I can certainly see that scientists might: all our questions, big and small, suddenly answered. The term ‘scientist‘ though would no longer be meaningful in the previous sense. Rather than training in science to do scientific work, interested citizens would train to understand the results, merely to satisfy their own curiosity. And so on.

But such scenarios, in which even highly ‘creative’ professionals find themselves out of work, cannot be sustained without drastic reorganization of societies. There’s no point to owning these means of production if no humans can afford to buy product (and indeed food and shelter and so on).
—-
And a recent random thought…some advocates of the ‘AI revolution’ assert that all our pressing human problems will be solved, such as climate change. However, as Naomi Klein recently points out, we already know the solutions. The problem isn’t ‘climate change’ (etc), it’s the disruption to vested interests with large sunk cost in infrastructure, and their shareholders and our ‘governments’ that lack leadership and so enable continuation of the status quo, and economic disruptions and displacements that impact everyone.

Suppose then that some sufficiently ‘wise’ AI - I assume such an AI has been cured of current LLMs’ hallucinogenic dependencies - identified the true barriers to climate change and other social problems and advocated that (say) oil production immediately scale down dramatically, governments be replaced, all citizens be paid a living wage, and so on.

What would the owners of this AI do at this point?

I have no particular position on whether AI is a threat, as I think that’s the wrong question. More to the point is how do we embed it within our societies - in which sense I think ‘who owns it’ is a fundamental question! - and how do we address the social and economic revolutions that might follow, and are we likely to pay its ‘pronouncements’ and ‘warnings’ any more attention than we do our scientists and others right now?

Moreover, we seem to be facing a number of existential crises right now. About the idea of AI as yet another, I think ‘meh! What’s one more?’
 
Just to go on a slightly different tack here, and with one other thought thrown in at the end:

Some years ago I was amazed to read a proposal to ‘automate science’. I forget the details, except that it did envisage replacement or reduction of scientists’ numbers as an advantage.

Science can be characterized as one of those ‘uniquely human’ and ‘creative’ activities. Part of the creativity arises from chance: even specialists in the same topic read the same papers but in a different order, and so different ideas emerge, one of which later proves fruitful. The more interesting path is where human personality comes into play: different scientists through different cognitive biases/preferences/whatever simply ’see’ different things in the same papers. The more unusual a particular person’s way of seeing the world, the more likely a truly original hypothesis.

Now, returning to the idea of automating science, it’s possible to see that all we need is a sufficiently powerful machine to take all available papers as input - note we are close to this with LLMs now, if not already there - then generate all possible hypotheses. Such a machine could also be constructed to determine which hypotheses are currently testable, and generate the designs, methods, and even send out blueprints for construction of the instruments and subsequently initiate collection of data.

Note that such a machine instantly removes the ‘human advantage’ from science, whether it be an unusual way of looking at the world or simple chance factors. The machine doesn’t need ‘intelligence’ or ‘sentience’ as we usually mean these things. It only needs to be sufficiently powerful. I think we’re pretty close to realizing this scenario.

Suppose this example generalizes to (most? All?) other things we do to ‘earn a living’ in our current economic structures: where does this leave us?

Machines don’t benefit, as the results are meaningless to ‘them’. I can certainly see that scientists might: all our questions, big and small, suddenly answered. The term ‘scientist‘ though would no longer be meaningful in the previous sense. Rather than training in science to do scientific work, interested citizens would train to understand the results, merely to satisfy their own curiosity. And so on.

But such scenarios, in which even highly ‘creative’ professionals find themselves out of work, cannot be sustained without drastic reorganization of societies. There’s no point to owning these means of production if no humans can afford to buy product (and indeed food and shelter and so on).
—-
And a recent random thought…some advocates of the ‘AI revolution’ assert that all our pressing human problems will be solved, such as climate change. However, as Naomi Klein recently points out, we already know the solutions. The problem isn’t ‘climate change’ (etc), it’s the disruption to vested interests with large sunk cost in infrastructure, and their shareholders and our ‘governments’ that lack leadership and so enable continuation of the status quo, and economic disruptions and displacements that impact everyone.

Suppose then that some sufficiently ‘wise’ AI - I assume such an AI has been cured of current LLMs’ hallucinogenic dependencies - identified the true barriers to climate change and other social problems and advocated that (say) oil production immediately scale down dramatically, governments be replaced, all citizens be paid a living wage, and so on.

What would the owners of this AI do at this point?

I have no particular position on whether AI is a threat, as I think that’s the wrong question. More to the point is how do we embed it within our societies - in which sense I think ‘who owns it’ is a fundamental question! - and how do we address the social and economic revolutions that might follow, and are we likely to pay its ‘pronouncements’ and ‘warnings’ any more attention than we do our scientists and others right now?

Moreover, we seem to be facing a number of existential crises right now. About the idea of AI as yet another, I think ‘meh! What’s one more?’
We’re not as close as you might think to that future. AI can automate, or is close to being able to automate, some of the work that goes into doing science, but it is not close to automating science itself. Fundamentally, AI doesn’t understand what it is doing or why; it doesn’t start from first principles and then inquire as to what’s next. It has no desire to even do so. It has a limited ability to generate solutions it hasn’t seen before, but simultaneously it can be tricked into generating false solutions based on information outside of its training regime, and it doesn’t understand when it is being manipulated. It can’t continuously learn new information - especially not in the long term.

Now some of these issues are solvable. Others less so. This isn’t to diminish the potential of what even current AI models are capable of, but we aren’t close to achieving general AI yet, and the point of the Stanford study I posted earlier is that it won’t sneak up on us. If we start getting close, we’ll know.

Other aspects of the points above will be harder to reach - the idea of will, for instance: an AI that wants things, that is able to self-reflect and understand what it is doing and why. That part is necessary for performing science - it is in fact the crucial element of science. So no, you can’t just “automate science” without at least limited sentience and thus a more general AI than we have today. What we have today is machine learning. It’s basically an incredibly huge logistic regression over human knowledge, powered by humans (the tagging of information is done in large part by poorly paid manual labor in places like Africa and Asia, where workers are currently fighting to unionize). It’s quite amazing what you can do with that, but people need to have a more nuanced and realistic understanding of what’s actually happening.
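
For anyone who wants the “logistic regression” framing made concrete, here is a toy numerical sketch (sizes and weights invented purely for illustration): the final step of a language model is a linear map from a context vector to vocabulary scores followed by a softmax, which is exactly multinomial logistic regression, just with the input features learned by the layers underneath.

```python
# Toy sketch: the output layer of a language model is softmax(W @ h + b),
# i.e. multinomial logistic regression over the vocabulary. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]

hidden_dim = 8
context_vector = rng.normal(size=hidden_dim)      # stand-in for the model's summary of the prompt
W = rng.normal(size=(len(vocab), hidden_dim))     # the "regression" weights
b = np.zeros(len(vocab))

logits = W @ context_vector + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"P(next token = {token!r}) = {p:.2f}")
```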

The AI apocalypse ain’t here … yet. But that doesn’t mean machine learning can’t be hugely disruptive to human industry and society all by itself.
 
^ I think I covered most of what you say in your first paragraph when I stated that the "results are meaningless" to these AI. I should add I don’t claim we are either close to or far from this future; I only mean to point out this one scenario as a thought experiment for considering impacts on other types of creative industry.

But, to pursue that scenario a little more I don’t see it as a problem of general AI whereas I guess you do? The issues I see are that next generation LLMs and other generative/adversarial/whatever networks need to be able to represent the scientific literature in a sufficiently coherent mathematical/logical form to make cogent hypothesis generation possible (but given sufficient computing power, generating a high proportion of rubbish hypotheses might not matter!), and the matter of ‘sufficient computing power’ itself. In terms of the former, the requirement that scientific writing be precise and careful makes the task a little easier. In terms of the latter, I agree we aren’t there yet, and I have no sense of how long it might take to reach it as we‘re not even sure what that level is. Thus, it might be months…or decades…or…

I fear we must agree to disagree on the necessity of “will” for performing science. Why is it necessary? What do you mean by it? BTW, this might arise from you and I being in different scientific fields? (Mine is social science).

I note a widespread tendency to privilege ourselves as ‘special’ in some way. Let me hasten to add that we undoubtedly are, but whether we are more special than what we do, and whether what we do is reproducible by other means is the question. ‘Yes it is’ would be the verdict of technology to this point, but let’s not make that an argument about whether what we do in science is also reproducible by other means.

The appeal to notions such as “will” and “sentience” is to escape into areas so diffuse as to defeat further argument. Wittgenstein would ask ‘to what are we pointing?’.

But let me surmise that you bring in "will" to mean that humans’ scientific pursuits are goal-directed, driven by needs we recognise as a result of a superbly big brain, with multi-sensory experience and a capacity for emotion, navigating a large, complex and ambiguous world. Good point, except I covered this by saying the machine in my thought experiment would “generate all possible hypotheses”. No goal-setting needed!

By the way, apologies if I’ve set up a straw man here and this is not what you meant at all!! That’s the risk once we get into philosophy I think :oops:

Otherwise though I agree with what you write above (I need however to search back to your post concerning the Stanford study). The statistician in me especially likes your characterization of LLMs as a “huge logistic regression over human knowledge”. Well put.
 
We've been to this rodeo before. In 1492, the monk Johannes Trithemius had some things to say about the printing press, in his essay "In Praise of Scribes".

"The word written on parchment will last a thousand years. The most you can expect a book of paper to survive is two hundred years."

Parchment is made of animal skin, while paper is made from cellulose derived from plant fibers. Modern paper does degrade because it's made from wood pulp, but in Trithemius's time, paper was made from old rags, a material that remains stable over hundreds of years, as the surviving copies of the Gutenberg Bible show.

"Printed books will never be the equivalent of handwritten codices, especially since printed books are often deficient in spelling and appearance."

His diatribe was disseminated by printing press, not hand-copied by monks. I'm sure our AI overlords will diligently record all of the predictions from today to share with our descendants, so that they can see how silly we were.
You know, if the human race had its act together, automation and AI might set us free, but it’s just as likely to destroy us. This is not a species that believes in sharing the wealth, but in hoarding it. And it’s not an argument against technological advancement, just an argument that Capitalism will not serve the majority into the future, and until the human species can learn to share for the common good and be like-minded in where we need to head, I do not have high hopes for us. :unsure:
 
^ I think I covered most of what you say in your first paragraph when I stated that the "results are meaningless" to these AI. I should add I don’t claim we are either close to or far from this future; I only mean to point out this one scenario as a thought experiment for considering impacts on other types of creative industry.

But, to pursue that scenario a little more I don’t see it as a problem of general AI whereas I guess you do? The issues I see are that next generation LLMs and other generative/adversarial/whatever networks need to be able to represent the scientific literature in a sufficiently coherent mathematical/logical form to make cogent hypothesis generation possible (but given sufficient computing power, generating a high proportion of rubbish hypotheses might not matter!), and the matter of ‘sufficient computing power’ itself. In terms of the former, the requirement that scientific writing be precise and careful makes the task a little easier. In terms of the latter, I agree we aren’t there yet, and I have no sense of how long it might take to reach it as we‘re not even sure what that level is. Thus, it might be months…or decades…or…

I fear we must agree to disagree on the necessity of “will” for performing science. Why is it necessary? What do you mean by it? BTW, this might arise from you and I being in different scientific fields? (Mine is social science).

I note a widespread tendency to privilege ourselves as ‘special’ in some way. Let me hasten to add that we undoubtedly are, but whether we are more special than what we do, and whether what we do is reproducible by other means is the question. ‘Yes it is’ would be the verdict of technology to this point, but let’s not make that an argument about whether what we do in science is also reproducible by other means.

The appeal to notions such as “will” and “sentience” is to escape into areas so diffuse as to defeat further argument. Wittgenstein would ask ‘to what are we pointing?’.

But let me surmise that you bring in "will" to mean that humans’ scientific pursuits are goal-directed, driven by needs we recognise as a result of a superbly big brain, with multi-sensory experience and a capacity for emotion, navigating a large, complex and ambiguous world. Good point, except I covered this by saying the machine in my thought experiment would “generate all possible hypotheses”. No goal-setting needed!

By the way, apologies if I’ve set up a straw man here and this is not what you meant at all!! That’s the risk once we get into philosophy I think :oops:

Otherwise though I agree with what you write above (I need however to search back to your post concerning the Stanford study). The statistician in me especially likes your characterization of LLMs as a “huge logistic regression over human knowledge”. Well put.
The Stanford study I mentioned is in this post here:


Basically LLMs are no different from other deep neural networks, and any sense of “emergent” capabilities is a function of the metrics used to assess their capabilities rather than something intrinsic. Measured correctly, they show linear growth in aptitude with model complexity, much like other neural networks. So there’s no sudden shift in quality or emergent behavior from these models; rather, they are progressing exactly as you would expect, with predictable behavior. So if LLMs or other such neural networks lead to a “general AI” (maybe), we’ll see it coming. It won’t just appear.
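
A synthetic sketch of that metric argument (numbers invented purely to show the shape of the curves): if per-token accuracy improves smoothly with scale, a continuous metric looks gradual, while an all-or-nothing metric such as exact match on a ten-token answer appears to jump from nothing to something.

```python
# Smooth underlying improvement vs. an apparently "emergent" jump under a
# discontinuous metric. Purely synthetic numbers for illustration.
scales = [1, 2, 4, 8, 16, 32, 64]                                 # pretend model sizes
per_token_accuracy = [0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90]   # smooth, roughly linear growth

answer_length = 10
for scale, acc in zip(scales, per_token_accuracy):
    exact_match = acc ** answer_length    # every token must be right at once
    print(f"scale={scale:>3}  per-token={acc:.2f}  exact-match={exact_match:.4f}")

# Per-token accuracy climbs steadily; exact match sits near zero and then
# "suddenly" appears - an artifact of the metric, not a phase change in the model.
```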

And I suppose it depends on what you mean by “doing science”? Basically science is an act of inquisitiveness. LLMs aren’t inquisitive. That’s what I mean by “will”. They can’t generate hypotheses, they can’t test hypotheses, they can’t generate new hypotheses based on those tests. That’s simply not possible. With very careful prompting they can generate tools or be tools to test hypotheses but that’s the extent.

To flesh out the above in more detail: with science, ultimately it is the ability and will to ask questions that is the most important part (1). The next important part is figuring out how to ask the right question (2), then figuring out how to answer that question (3), then getting the answer (4), then generating a new question (5). LLMs and neural networks in general are so far really only capable of aiding in step 4 and a little bit of 3. They might one day be able to fully help with 3 and maaaaaybe one day 2 … I’d have to think about that. But 1 and 5 require sentience. It requires an understanding of what you’re doing and why. Truthfully, so does 2.

Even take art: something like Midjourney can generate fantastical pieces of art with very careful prodding from a human on the other end, and based on the tagging of vast amounts of art by humans. However, it doesn’t generate art to please itself. It doesn’t know or understand what it is doing. Ultimately a human is still in fact in charge of the final look and feel, by having the AI generate and regenerate the images based on prompts and further refinement of those prompts. When Midjourney starts painting in its spare time to satisfy its own need to create art, then we’ll be in real trouble.

Thus to automate out humans entirely is simply not possible with the current underlying technology. It doesn’t strive to understand things; it’s trained on things to give a simulation of understanding. And as mentioned above, this applies to more than just science - science isn’t special here. Again, this is not to dismiss the disruptive nature of deep learning frameworks for artists or scientists or any other field. But really, AI “generated” should be seen as AI “aided”. A human generated the idea; the AI fleshes it out based on what it’s seen humans do in similar circumstances. It’s a probabilistic machine: what’s the most likely correct next thing to write/draw/etc.? That’s still incredibly powerful, but it isn’t general AI and isn’t capable of fully automated, independent action.
 
The Stanford study I mentioned is in this post here:


Basically LLMs are no different from other deep neural networks, and any sense of “emergent” capabilities is a function of the metrics used to assess their capabilities rather than something intrinsic. Measured correctly, they show linear growth in aptitude with model complexity, much like other neural networks. So there’s no sudden shift in quality or emergent behavior from these models; rather, they are progressing exactly as you would expect, with predictable behavior. So if LLMs or other such neural networks lead to a “general AI” (maybe), we’ll see it coming. It won’t just appear.

And I suppose it depends on what you mean by “doing science”? Basically science is an act of inquisitiveness. LLMs aren’t inquisitive. That’s what I mean by “will”. They can’t generate hypotheses, they can’t test hypotheses, they can’t generate new hypotheses based on those tests. That’s simply not possible. With very careful prompting they can generate tools or be tools to test hypotheses but that’s the extent.

To flesh out the above in more detail: with science, ultimately it is the ability and will to ask questions that is the most important part (1). The next important part is figuring out how to ask the right question (2), then figuring out how to answer that question (3), then getting the answer (4), then generating a new question (5). LLMs and neural networks in general are so far really only capable of aiding in step 4 and a little bit of 3. They might one day be able to fully help with 3 and maaaaaybe one day 2 … I’d have to think about that. But 1 and 5 require sentience. It requires an understanding of what you’re doing and why. Truthfully, so does 2.

Even take art: something like Midjourney can generate fantastical pieces of art with very careful prodding from a human on the other end, and based on the tagging of vast amounts of art by humans. However, it doesn’t generate art to please itself. It doesn’t know or understand what it is doing. Ultimately a human is still in fact in charge of the final look and feel, by having the AI generate and regenerate the images based on prompts and further refinement of those prompts. When Midjourney starts painting in its spare time to satisfy its own need to create art, then we’ll be in real trouble.

Thus to automate out humans entirely is simply not possible with the current underlying technology. It doesn’t strive to understand things; it’s trained on things to give a simulation of understanding. And as mentioned above, this applies to more than just science - science isn’t special here. Again, this is not to dismiss the disruptive nature of deep learning frameworks for artists or scientists or any other field. But really, AI “generated” should be seen as AI “aided”. A human generated the idea; the AI fleshes it out based on what it’s seen humans do in similar circumstances. It’s a probabilistic machine: what’s the most likely correct next thing to write/draw/etc.? That’s still incredibly powerful, but it isn’t general AI and isn’t capable of fully automated, independent action.
I’ll say we are in the baby steps of AI. Tough to predict at this point where we will end up. Read some good SciFi. :D
I am pretty confident in saying that as long as we rely on the accumulation of large amounts of wealth at the expense of fellow human beings as our measure of success, AI will just provide another avenue for smaller groups of human beings to achieve that at the expense of others. I see socialism as the future, but that will require a fundamental change in our perspective/nature, and it will probably require some cataclysmic event to get us on the same page, that is, if we survive the event to benefit from it. :unsure:
 
Note that such a machine instantly removes the ‘human advantage’ from science, whether it be an unusual way of looking at the world or simple chance factors. The machine doesn’t need ‘intelligence’ or ‘sentience’ as we usually mean these things. It only needs to be sufficiently powerful. I think we’re pretty close to realizing this scenario.
You are missing a big part, though.

"That's odd. I wonder what causes that."

Humans are proactive and curious. Computers are reactive. Science has to start somewhere, and the most sophisticated LLM in the world is still only a reactive tool.

Why do we create and explore? I think it has something to do with sex. And probably hunger, and maybe the need to take a pee. These are things (the human experience) that are difficult to simulate, in a meaningful way.
 
I saw the iOS ChatGPT app on the store and almost downloaded it, but got scared :LOL:.

My wife is an HR Director, and she played with one of the AIs, having it write a job description. She was impressed: it took a fifth of the normal time for that task, and the work wasn't bad.
 
You are missing a big part, though.

"That's odd. I wonder what causes that."

Humans are proactive and curious. Computers are reactive. Science has to start somewhere, and the most sophisticated LLM in the world is still only a reactive tool.

Why do we create and explore? I think it has something to do with sex. And probably hunger, and maybe the need to take a pee. These are things (the human experience) that are difficult to simulate, in a meaningful way.
I agree but…

That is still to talk of an AI as if it has to be curious. It doesn’t, any more than a spreadsheet needs to understand a balance sheet.

To be clear, the specific scenario I started with - the proposal that scientists could be replaced - is not one I advocate. As a scientist myself, I find the idea appalling. But that’s not the thread topic here.

Perhaps there’s a divide between us in this thread in how far we think AI - I mean narrow AI - can go in the near to medium future? And there is perhaps a little reactivity to the more dystopian scenarios out there - Terminator/Skynet, anyone? Whereas what we have on this board is an articulate, intelligent, tech-savvy and all sorts of other-savvy group capable of discussing the nuances of LLMs’ narrow-AI impact on … well, just to name a few personal areas of interest - professional roles, the future scope and definition of work, and socioeconomic impacts. But there are no doubt others.

If LLMs and other narrow AI have only little impact, becoming just the equivalent of another office tool, then there’s nothing to see here.

However, I’m wary of inductive arguments of the type "we’ve experienced tech disruption before, sure it’s bad for a few, but hey, new jobs, yada yada yada". I don’t think we should resile from a searching examination of the possible scenarios fuelled - let us say - by our capacities for sex, hunger and the need to take a pee!
 
I saw the iOS ChatGPT app on the store and almost downloaded it, but got scared :LOL:.

My wife is an HR Director, and she played with one of the AIs, having it write a job description. She was impressed: it took a fifth of the normal time for that task, and the work wasn't bad.
I experimented with ChatGPT late last year. At first I was impressed: I thought I saw on my screen a quite good, quite useful research assistant. And the prose was ‘good’. I was a bit excited about the extra leverage in my work…

Alas, its further replies turned out to be one-dimensional and repetitive. Moreover, I soon encountered its hallucinations too.

That shows my use case was the wrong one. Other use cases though? Yeah, can be impressive…
 
Perhaps there’s a divide between us in this thread in how far we think AI - I mean narrow AI - can go in the near to medium future? And there is perhaps a little reactivity to the more dystopian scenarios out there - Terminator/Skynet, anyone? Whereas what we have on this board is an articulate, intelligent, tech-savvy and all sorts of other-savvy group capable of discussing the nuances of LLMs’ narrow-AI impact on … well, just to name a few personal areas of interest - professional roles, the future scope and definition of work, and socioeconomic impacts. But there are no doubt others.

I think it partly depends on what you think narrow AI / ML can do today.

But you can bet that Google and Microsoft are currently losing their minds trying to figure out the edges of LLMs like ChatGPT and Bard, and how to take advantage of them.

That shows my use case was the wrong one. Other use cases though? Yeah, can be impressive…

Yeah, in the immediate term, the things it feels most adequate at are the things that are part of my job but not my job description - having to write up small bits of boilerplate e-mail, for example. The stuff that interests me in the short term is automating the places where we currently have people, because it's about bridging the free-form and the structured.

Can I use an LLM to auto-create useful structured data from support e-mails, and help detect missing data and ask for it? Can I use an LLM to ask free-form questions about structured data, and then output the most interesting angles as a Power BI dashboard? I honestly think this is probably going to have a larger impact as the next generation of "voice assistant", one that can make better guesses about what you want without relying on static commands.
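
Something like this rough sketch is what I have in mind for the support e-mail case (the prompt, the field names and call_llm() are hypothetical placeholders, not any specific product's API): ask the model for a fixed JSON schema, then check which required fields came back empty so a follow-up question can be generated.

```python
# Sketch: extract structured ticket data from a free-form support e-mail and
# flag missing fields. call_llm() stands in for whatever model endpoint you use.
import json

REQUIRED_FIELDS = ["customer_name", "product", "order_number", "issue_summary"]

EXTRACTION_PROMPT = (
    "Extract the following fields from the support e-mail as JSON, using null "
    f"for anything not stated: {', '.join(REQUIRED_FIELDS)}.\n\nE-mail:\n"
)

def call_llm(prompt: str) -> str:
    """Placeholder: returns a canned response instead of calling a real model."""
    return ('{"customer_name": "A. Jensen", "product": "Model X", '
            '"order_number": null, "issue_summary": "unit will not power on"}')

def extract_ticket(email_body: str):
    record = json.loads(call_llm(EXTRACTION_PROMPT + email_body))
    missing = [field for field in REQUIRED_FIELDS if not record.get(field)]
    return record, missing

record, missing = extract_ticket("Hi, my Model X won't turn on. - A. Jensen")
if missing:
    print("Ask the customer for:", ", ".join(missing))   # e.g. order_number
```

The fiddly part in practice is validating that the model's JSON actually parses and that it hasn't invented values that were never in the e-mail.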
 
You are missing a big part, though.

"That's odd. I wonder what causes that."

Humans are proactive and curious. Computers are reactive. Science has to start somewhere, and the most sophisticated LLM in the world is still only a reactive tool.

Why do we create and explore? I think it has something to do with sex. And probably hunger, and maybe the need to take a pee. These are things (the human experience) that are difficult to simulate, in a meaningful way.
Are we not just biological computers? If so, why could a realistic, human-like AI personality, with those elements of curiosity, not be created once the technology exists?

I’ll step back to the lights-on/lights-off standard. As human beings, our lights are on: we have consciousness and self-awareness. You could create something very human-like that reacts to its environment just like we do, and interacts with other human beings in a manner indistinguishable from a human being, but its lights are off; it may have the sensors that give it input, but it’s still just running an elaborate program. Could this be us?

This leads me to two questions: what is consciousness exactly - its origin, how it works - and do you need an external element, something like a soul, to make it happen, versus just a program that is running?

Anyone tried this?
 
I see socialism as the future, but that will require a fundamental change in our perspective/nature, and it will probably require some cataclysmic event to get us on the same page

[image: soylentgreen.jpg]
 
This leads me to two questions: what is consciousness exactly - its origin, how it works - and do you need an external element, something like a soul, to make it happen, versus just a program that is running?

The nature of consciousness is a manifold question, but the evidence that I have seen suggests that it is more affective than effective. The brain functions, at the observable level, almost exactly like a type of computer. It performs analysis and calculation, fitting to its application (e.g., the brain of a duck performs flight control, ground or water navigation, communication with other ducks, food acquisition, propagation and predator avoidance), and it is not evident that consciousness plays any kind of direct role relative to mental function.

I posit that the conscious state of a duck, or a cat, or a barn spider is not different from human consciousness in any meaningful way. Those other critters are merely unable to articulate how they feel about feelings and awareness to us in a common language. My contention is that consciousness/soul derives from the basic survival instinct that has driven life forms to persist, to multiply and to compete. Because, why else would we construct fairy tales about life after death? We want our consciousness to not end abruptly but continue, in some recognizable cohesion, indefinitely. (Well, some of us do – some of us would welcome oblivion after this crapfest called life.)

If my position is valid, it raises questions. How would we go about instantiating consciousness in a machine? Would we do it as research, to discover whether the machine behaves more like us/animals with a spectator consciousness vs an interactive consciousness, and is it ethical to create life-equivalents just for the sake of finding the me-boson? In fact, is it even ethical to create consciousness machines just for the sake of finding out whether it can be done?
 