This is a topic snowballing all over the internet so I wanted to start a thread dedicated to it.
Sounds like writers are getting pretty worried, understandable considering the strike.
I’ll ask: who in the business world, in corpo-land, sees the danger? Apparently some do, as warnings have been issued, but the profit-motive blinders are on. The corporatocracy will do what they do best, shake up the perceived status quo for profit advantage, until governments rein them in (in the US, not likely under GOP rule/effluence) or an angry mob of the unemployed breaks down their doors and burns the place down.
Agreed, in the end it's just another form of automation. Imagine an Amazon or FedEx facility without it; you would probably need thousands of humans doing all the manual sorting and such. These things will also always need some form of human intervention: to this day computers need techs, programmers, etc., so it's just a matter of finding where you can fit in.

I do feel for the people who will be displaced. I may be one of them. On the other hand, that's what new technology always does. In the '70s and '80s, people were rightfully warning about all the jobs that would be lost to computers.
The typical counter to concerns about technological advancement is: oh, new jobs will pop up to replace the old, so don't worry, peon, that your current well-paying job just evaporated to create another millionaire/billionaire; you'll find something at half the wages to carry you through.

Whenever a new disruptive technology has been introduced, the people of that era had similar concerns. It happened with the telegraph, the radio, moving pictures, automobiles, robotics, personal computers, the steam engine. It wouldn't surprise me if it happened when humanity first discovered the wheel. With AI, we culturally have decades of subconscious fears of Skynet taking over, making humans obsolete, perhaps starting a nuclear war in our stead. Yet, despite these disruptive technologies, we somehow carry on, as humans always do. Besides, once the genie is out of the bottle, it won't go back inside, no matter how many wishes are cast. Some jobs become obsolete, but new opportunities usually arise, many of them completely unforeseen by even the most respected futurists of the day.
Concerning modern AI, as we approach another alleged solution to the Fermi Paradox, I have to wonder if we're all being fooled by nothing more than a Mechanical Turk.
The problem is us: human beings, capitalism, and human greed. My position is that capitalism without extreme regulations and caps on wealth, minus a view toward civilization as a whole, will not be able to carry us into the future. Too much ME and not enough WE in our economic calculations.

I think the difference here is that the sort of work some folks are looking to replace with things like ChatGPT and Stable Diffusion is inherently dependent on continued output by people to some extent. I.e., I can't ask ChatGPT about research it's never been trained on or software techniques it's never seen on Stack Overflow. I can't ask Stable Diffusion to mimic an artist not in its dataset. And new training data needs to be tagged by a bunch of low-wage workers in the Global South, apparently. So it's not exactly clear to me what companies expect to get by replacing writers and artists in the long term, other than a race to the bottom in both price and quality.
But I don’t think we fully know what this all means yet, for sure. My own issue with ML tends to be that we don’t have as good an understanding of these black boxes we've been building as we need in order to make reasonable decisions about where to apply them or how to improve them, beyond just throwing more data at them. Or the fact that we’re building black boxes that spit out the same biases we’ve baked into our datasets, and dealing with that requires a great deal of low-wage work.
So are we producing white-collar work, like PCs tended to? Or are we producing unskilled labor? So far it seems more like the latter: very few people are involved in designing models or training frameworks, and tagging data is instead where the demand is for now.
Progress is not always going to propel us forward, and ultimately it is society's choice how we apply this new technology, and what we do to mitigate the harms it can bring.
It feels like there's a veil of snake oil here. There has been for a few years now, but ChatGPT and Stable Diffusion make it easy to leap the gap and imagine a future we’re not at yet, partly because of how convincing some of the output is. To the point that you have folks arguing that ChatGPT can “reason” based on that output. If we’re being fooled, we are doing it to ourselves.
And then you realize how much of the output is still akin to a wild acid trip: https://privateisland.tv/project#synthetic-summer
Right now, being a captain flying a wide-body at Delta pays $500k per year. Thirty years ago, FedEx floated the idea of automated piloting. If it can happen, it will, along with the promise: you’ll find another good-paying job… somewhere.
Say what you want about capitalism, but without it we wouldn’t know where to find the beginning of sentences.
We've been to this rodeo before. In 1492, the monk Johannes Trithemius had some things to say about the printing press in his essay "In Praise of Scribes".
"The word written on parchment will last a thousand years. The most you can expect a book of paper to survive is two hundred years."
Parchment is made of animal skin, while paper is made from cellulose derived from plant fibers. Modern paper does degrade because it's made from wood pulp, but in Trithemius's time, paper was made from old rags, a material that remains stable over hundreds of years, as the surviving copies of the Gutenberg Bible show.
"Printed books will never be the equivalent of handwritten codices, especially since printed books are often deficient in spelling and appearance."
His diatribe was disseminated by printing press, not hand-copied by monks. I'm sure our AI overlords will diligently record all of today's predictions to share with our descendants, so that they can see how silly we were.
There’s always a place for handmade things made by real craftspeople. And I assume when AI comes for my job (it’s happening), there will be a place for fancy bespoke lawyering for rich people.

It'll be a rich hipster thing. They'll pick you out of a catalog that's sitting right next to their vinyl collection, cassettes, and Betamax tapes.
The biggest danger of AI is unemployment. Banks and insurance companies? All math. All of the back-end clerks and adjudicators can be replaced with AI pretty easily. Accountants, too. Even doctors and mechanics, as AI does problem analysis very well.
You either get on the train as it moves or face the same obsolescence as the antiquated hardware you're managing. I made that decision midway through my career when I saw things going that way, and it paid off well for me.
Using ChatGPT or many of these other AI engines as an example, there is a learning curve to really understanding how to use them properly. I would think writers, or those proficient with them, are best suited for that type of role; to me, this is where you find the gap and fill it. It's definitely not easy, and it's a change we have to adapt to, but it can be done if one has the initiative.

Yeah, I agree there's definitely a need to be flexible. But keep in mind your example relies on similar skills. Automation tends to displace one skill with a different skill, which may or may not be similar enough for everyone to make the jump, and folks can find themselves displaced into the unskilled labor pool if there isn't enough demand for the new skilled-labor jobs.