I have been sitting on the sidelines for the past year as the frenzy surrounding the ELE (extinction-level event) that is ChatGPT, and its ever-growing coterie of clones and similar products, has built.
There are clearly some great uses for the technology. One of my developers (we make technical training for IT) built a little tool that takes the outline for a training course and spits out description text plus three summaries (for marketing purposes: 25, 50, and 125 words). It does a good job: in about 7 seconds the tool runs the query and populates the fields. The cost (yes, we pay for the API access) is about a tenth of a penny.
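For readers curious what such a tool looks like under the hood, here is a minimal sketch of the idea, not the actual implementation: the OpenAI Python client, the model choice, the prompt wording, and the helper name are all my own illustrative assumptions.

```python
# A minimal sketch of an outline-to-marketing-copy tool (illustrative only).
# Assumptions: the OpenAI Python client (openai>=1.0), the "gpt-3.5-turbo"
# model, and the prompt wording are guesses, not the tool described in the post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_outline(outline: str) -> str:
    """Ask the model for a course description plus 25/50/125-word summaries."""
    prompt = (
        "You write marketing copy for IT training courses.\n"
        "Given the course outline below, produce:\n"
        "1. A one-paragraph course description.\n"
        "2. Marketing summaries of 25, 50, and 125 words.\n\n"
        f"Outline:\n{outline}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_outline("Module 1: Networking basics\nModule 2: Subnetting"))
```

A single round trip like this is what lands in the "about 7 seconds, about a tenth of a penny" territory described above.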
Could I do better?
Absolutely.
But it takes me about 10 minutes to read and grok the outline, and to craft the description and the three summaries. At my pay grade that is 1/6 of an hour, probably $20 (fully burdened rate including benefits). For a product marketing person, it might cost $10 (lower salary and overhead).
This is a slam dunk for us, but it is worrying for many reasons.
First – Solid Jobs go poof
The above example is just a tiny taste of the types of jobs that are at risk. A LOT of solid middle class jobs are related to crafting marketing copy, doing simple support tasks, and doing entry level analysis.
Those jobs are 100% at risk.
Simple tasks like this, where contextual interpretation and summarizing are needed, are a sweet spot for the burgeoning AI chatbots. Yet that is the hook that gets the chatbot in the door. Already we are hearing reports of lawyers using ChatGPT in their research, leading to hilarity when the cases cited turned out to be fabricated out of whole cloth.
I am also hearing that big investment banks are beginning to use ChatGPT for low-level analysis. Tasks that are commonly the fodder of entry-level bankers are being farmed out to ChatGPT. And judging from Matt Levine’s writing at Bloomberg, the chatbot does a good enough job (comparable to the entry level).
This is merely the opening salvo, but this trickle will clearly become a roaring torrent of good, career-building entry-level jobs disappearing into the void.
Management’s response
While I can’t say that I know this for certain, it seems obvious that the management and executive class are licking their chops. This technology, especially as it matures, is going to give them the ability to massively trim labor expenses.
This is not unexpected. Much of modern capitalism has been reduced to a battle between capital and labor, and the last 40+ years have seen an accelerating decline in labor’s power in that relationship; this technology will likely speed up the trend.
Indeed, they are positively giddy at the opportunity to trim workforces, replacing people with these machines.
This is already beginning. I am seeing mid-level marketing people being cut and replaced with ChatGPT, and a department of 10 writers trimmed to two senior ones who merely edit and clean up the ChatGPT-generated text. A cut of 80%. Yikes.
Hallucinations
The tendency of these chatbots to fabricate answers is well known and is being studied widely. That is how the lawyers referenced above were tripped up: the chatbot was simply making up answers to their queries.
This has led to speculation that there is an emergent role, a profession if you will, focused on “prompt engineering”: people skilled at crafting queries to get a) good results and b) relevant responses.
Clearly, the current generation of generative AI (both the textual chatbots and the visual tools that turn text into images) requires some careful grooming of the inputs to get the best results.
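To make that grooming concrete, here is a purely illustrative comparison of a bare prompt with a more constrained one; the wording is my own, not a prescription.

```python
# Purely illustrative: a bare prompt versus a "groomed" one.
bare_prompt = "Summarize this course outline."

groomed_prompt = (
    "You are writing marketing copy for an IT training course.\n"
    "Summarize the outline below in exactly 50 words, in plain language,\n"
    "for a non-technical buyer. Do not invent topics that are not in the outline.\n"
    "Outline:\n"
    "{outline}"
)
```

The second version tells the model the audience, the length, and what not to invent, which is most of what "prompt engineering" amounts to in practice.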
While that is true today, it doesn’t take a swami to predict that as these chatbots and other tools become more capable, the need to have a skillful prompt to get “good enough” results will be curtailed.
If the official role of “prompt engineer” is still a thing in 5 years, I will eat the Indiana Jones hat that I use when hiking.
If I had kids who were about to head to university, I really have no idea what I would recommend.
It’s not all bad
While there is a large amount of investment happening, and it seems that some of the worst of the Silicon Valley tech bros are salivating at how this is going to take over the world, it is not a fait accompli. The compute footprint and the complexity of assembling a training set are real impediments to entry.
Yes, API-level access currently costs about a tenth of a penny per query, but the actual cost (compute cost) for a typical query is estimated to be about $0.36. That might not seem like a lot, but when you total it up, it gets very pricey: literally hundreds of thousands of dollars per day to keep the lights on.
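A quick back-of-the-envelope check: the $0.36 per-query figure is the estimate above, while the daily query volume below is an assumed round number of my own, purely for illustration.

```python
# Back-of-the-envelope daily compute cost.
# The $0.36 per-query estimate is from the post; the query volume is an
# assumed round number for illustration only.
cost_per_query = 0.36          # estimated compute cost per query, USD
queries_per_day = 1_000_000    # assumption: roughly a million queries a day

daily_cost = cost_per_query * queries_per_day
print(f"${daily_cost:,.0f} per day")  # -> $360,000 per day
```

At anything like that volume, the "hundreds of thousands of dollars per day" figure follows directly.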
Sure, this is going to come down in price, but as the models become more sophisticated, the compute requirements will grow, so I would expect the cost to remain roughly steady.
In fact, it would not surprise me to find that Microsoft is subsidizing a significant portion of the cost of running ChatGPT on their Azure infrastructure, an in-kind contribution if you will.
But one day, the bill will come due.
Next installments
This will be a series of posts exploring the rise of generative AI. Much of what I have read has been from the “enthusiast” viewpoint, turning a blind eye to the societal impact, particularly the socio-economic fallout.
I will explore how the rise of generative AI will begin to hollow out the pipeline of talent, particularly in Product and Marketing, before moving on to the implications for the broader middle class.
The Future
I am reminded of a conversation I had in an airport waiting area 20 years ago. A fellow traveler and I were talking about his daughter’s choices in university, and he asked my recommendation. Unreservedly, I said that learning Chinese would probably be the best thing he could recommend. Regardless of where in business she would end up, speaking the language of a major trading partner would open doors.
Today, I would be hard pressed to have an answer. I am just glad that I am nearing the end of my career, and that before the negative externalities come home to roost, I will be retired (or dead).
What a cheery thought.
Coda
The integrated AI that creates summaries and the like has hit the limit of my free account. Turning it back on would require subscribing to the AI package from WordPress/Jetpack, which would cost me $8.34 a month, paid annually.
Alas, I will just go back to crafting the summaries by hand. I am a cheap bastard!
Ugh; not more AI musings (I just finished editing some copy that had a significant section predicting the effects of AI on jobs/employment). One question I have is this: if AI takes over a lot of entry-level jobs, who will fill the higher-level jobs that ostensibly require developing and honing the skills required at the entry level? Given how AI “hallucinates” (that’s a terrible term for the phenomenon; it’s simple fabrication, as you stated) information already, and its own work being iteratively fed back into it for “advanced” training, that problem is just going to get worse. And without people who have any inkling that something might be wrong (let alone what, and how to go about testing that hypothesis) … well, you get the idea.
I’m not sanguine about what’s coming, but it seems to me the AI takeover might be slower and more problematic than most analysts I’ve read seem to think.
Alas, I fear that the entry-level jobs market has been thoroughly fucked already. Corporate America has completely shifted its hiring to focus on people who are already capable of doing the job, and is unwilling to train up a new hire (fresh out of school) when it can hire someone who can hit the ground running with the skills it needs.
Furthermore, I expect this disruption to become even more severe as management eyes profitability and the temptation to “eat their seed corn” instead of building the staff of tomorrow. This has been an issue for some time already, and generative AI will give them more ammunition to avoid the “risk” of hiring and training people.
When you hear HR people bitch and moan about people they paid to train leaving for better opportunities, you know the infection is already established. Perhaps if your company weren’t shitty to employees, they wouldn’t be looking to leave once they gain competencies that are in demand. But absent years of layoffs, boom/bust cycles, and ramping up fast only to lay off faster, there might be some trust between employer and employee.
I just don’t see that happening. The disease that started in tech is spreading across the economy, and unless workers begin to organize, they will continue to lose the battle.
Thanks for reading, and sorry to trip your “musings” meter around AI.
Also, this by Dave Karpf is worth a read: https://davekarpf.substack.com/p/bullet-points-a-couple-predictions