Apparently the first time JRR Tolkien used a tape recorder, he (semi-seriously) recorded himself praying the Lord’s Prayer in Gothic to exorcise any demons that might be inhabiting a machine that is capable of speaking on its own. He then proceeded to make some quite wonderful recordings of readings from The Hobbit and The Lord of the Rings. Better safe than sorry!
There’s been a lot of talk about AI over the past year because of recent impressive progress in the development of text- and image-generating AIs. A lot has been related to long-term doom stuff, but since I have no control over potentially inevitable future doom, that doesn’t seem worth bothering about.1 There’s also been discussion about how current AI actually works since nobody really knows (weird glitch-prompts related to archetypes??), which is more interesting but mostly over my head. Anyway, in the shorter term I think it would be interesting to consider what effects AI will have over the next decade or two, since that will actually be relevant to decisions I might be making right now. Given how fast things changed over the course of the past year, I’m sure they’ll continue to change fast. Any predictions will inevitably be wrong, but I think it would be interesting to make some broad ones and look back later to see how they fared.
I’ve recently heard a couple people express (semi-serious) concern that using ChatGPT will make people forget how to think. Although the bot’s eagerness to provide ready-served, comprehensive answers could perhaps lead to a lapse in critical thinking, I feel like we tend not to appreciate how much we’ve already been offloading our thinking to AI in some contexts.
Often trying to engage with people on Instagram has me feeling like this:
This is frustrating; I enjoy sharing my photos and people appreciate seeing them,2 but there’s this “robot” in between complicating things.
Every social media platform uses AI to some extent in its filtering and display. The simplest alternative would be just showing the most recent posts from the accounts you follow, and relying on manual reporting and investigation for inappropriate content. But if platforms want to keep users maximally engaged, and keep up with the scale of guideline breaches, they need a more dynamic automated response. I don’t know all the technicalities of when a platform is using a complex algorithm vs. AI, but suffice to say that AI has been in use here for a while, and it’s part of how these platforms decide what you see.
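For a rough sense of the difference, here’s a minimal sketch (in Python, with made-up names; not how any platform actually implements this) of a purely chronological feed next to an engagement-ranked one. On a real platform, the scoring function would be a trained model rather than a stub, and that scoring step is where the “AI” lives.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    timestamp: datetime
    text: str

def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
    """The 'simple' feed: newest posts from accounts you follow, nothing more."""
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.timestamp,
        reverse=True,
    )

def predicted_engagement(post: Post, user: str) -> float:
    """Placeholder. On a real platform this would be a trained model estimating
    how likely `user` is to like, comment on, or share `post`."""
    return 0.0

def ranked_feed(posts: list[Post], user: str, following: set[str]) -> list[Post]:
    """The engagement-maximising feed: ordered by model score, not recency."""
    candidates = [p for p in posts if p.author in following]
    return sorted(candidates, key=lambda p: predicted_engagement(p, user), reverse=True)
```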
People want to use Instagram to see cool things and keep up with their friends and acquaintances. The owners of Instagram want people to use it so they can make money from ads. These are conflicting priorities. The best compromise between them is apparently that you scroll through photos and see an ad every 3 posts. They make it take extra effort to view and add comments, because that’s effort that doesn’t go towards seeing more ads. The reach of posts is reduced to encourage content producers to use stories, because the transient nature of stories incentivises users to keep coming back every day, even multiple times a day, to keep up with the accounts they follow (since they can’t trust the newsfeed to show everything reliably). You’re strongly encouraged or even forced to use platforms in an app instead of a browser, because ad blockers generally can’t work inside apps. It’s easier to go along with all of these things, but they incentivise mindless scrolling over real interaction.3
Instagram is frustrating to use for all these reasons, but it’s also where all the people and cool photography and art are. On other platforms like Discord, Twitter, Facebook, etc.4 you get more direct engagement with other people, but that also requires more conscious effort and filtering to have a good experience. As a result those platforms are more polarizing and less popular. We’d rather let AI do more of the mental effort for us.
I do think AI will change social media even more, though. Bots are already a huge pain for any public internet space, but right now they’re pretty predictable and easy to identify. If bots become able to interact as dynamically and realistically as humans, I could imagine public internet spaces becoming so annoying to use that they’re no longer worth it. If everyone is forced to retreat into private spaces, that would be a huge loss for the public flow of ideas that makes up the internet at its best.
Speaking of social engagement…
AI relationships will be a thing. They already are. Interacting with real people takes effort and skill and is often discouraging; it’s easier not to if you can get away with it (and it’s pretty easy to get away with it). Any relevant statistic will tell you that a lot of people are struggling with romantic relationships, and relational intimacy was maybe the one major gap left in the simulacra market before AI that could hold conversations came along. Video games simulate the adventure and camaraderie that most young guys deeply desire but can’t access, social media satiates our basic desire for social connection… now we can fake deeper connection. Maybe a toy robot will fill that lonely hole. Maybe if enough people get sucked into these things and find their way back out, it will lead to communities rejecting technology more broadly…5
Jobs
Although I feel like everyone’s job could be affected in one way or another, AI will definitely affect some classes of labour more than others. I imagine the majority of manual labour that isn’t already automated will stay as it is. I’m not sure whether resilient, versatile robots are at a stage where they can be mass-produced, but even then I feel like robots doing plumbing or construction wouldn’t be much more efficient than people doing it… Any work where human connection is important would also last, which would include a lot of jobs in healthcare (physical and mental) and service.
White-collar jobs will probably be affected most in the short term, because that kind of information work is exactly what the AIs that became famous over the last year or so (LLMs, or large language models) are good at. Fields like law, finance, and media production that involve a lot of viewing and remixing information will probably be impacted very significantly. Since this class is generally considered the most valued and influential, we’re probably going to be hearing a lot about how it gets affected. The industrial revolution affected a lot of jobs as well, and greatly increased our standard of living, but at some cost to the environment and traditional social structures. I feel like the digital revolution is doing to information work what the industrial revolution did to physical work.
AI art has already been a huge discussion, and that’s going to continue as people become able to mass-produce every aspect of movies etc. Earlier this week a new text-to-video AI came out which produces video of animals with better details and mannerisms than I’ve ever seen from animation… Speaking of fake video, imagine what deepfake audio and video could do to politics and evidence-based legal cases… yeesh. Anyway, back to art: I think there are two skills you need (among others) for good art, writing, speaking, etc.: one is tasteful remixing of previous information, but the other is how you apply that to your specific context. Art that “speaks to you” relies on the latter; how well will AI be able to learn to “read the room”? I also imagine there will be some backlash against AI art, with people preferring art in physical media from people they know, but I’m pretty sure there will always be a demand for tasteless art to decorate hotel walls and whatnot.
Work which involves a lot more direct feedback between knowledge and physical reality will probably be more resilient. I was talking to a prospective chicken farmer recently, and he’s looking into ways that AI can dynamically automate some of the climate control in barns, but it seems like farming has already been automated so much that there’s not much room for further job loss there. There will always need to be a guy who knows about chickens who can check on them. We’re always going to want our healthcare professionals (whether physical or mental) to be actual humans who we feel can empathise with us. Similar for service workers; perhaps many storefronts can be automated, but I feel like we’ll always prefer real restaurant staff. Environmental work is similar; a computer will never be able to do it because of the direct fieldwork involved. Maybe robots in the long-term future, but as long as we care about interacting with and identifying bugs and birds and weeds, I think there’ll be work for humans to do related to that.
Part of this is also related to the fact that in order for an AI to learn something, the information has to already exist in a format legible to the AI, and someone has to care enough about the information to train the AI on it. ChatGPT is bad at describing insect (and even bird) identification because most insect identification resources are locked behind paywalled academic articles, and nobody cared enough to ensure that its training had any focus on relevant resources. The Merlin Bird ID and iNaturalist computer vision AIs were intentionally trained on bird and other organism identification, and a lot of effort went into putting information in a format that the programs could use. Merlin can identify birds well from photos and videos, and there are other AIs that can count people in crowds. But as far as I know there isn’t currently an AI that can count and identify flocks of birds from drone footage, for example, because nobody has put the effort into training an AI for that specific context. Merlin took years of volunteer training effort to get to its current state, and it’s working off a huge, high-quality photo dataset. It’s going to get easier and easier to train AIs to do whatever you might want them to do, but someone still has to do that in order for it to exist. There will always be obscure bugs and plants etc. for people to research that AI will never be aware of until after the research has been done and published. Whether people will always care enough to fund that research, I’m less certain of.
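To make the “format legible to the AI” point a bit more concrete, here’s roughly what training an image classifier on labelled photos looks like. This is a generic sketch in Python with PyTorch, not how Merlin or iNaturalist actually do it; the hypothetical bird_photos/ folder stands in for the curated, species-labelled dataset that has to exist before any of this can run.

```python
# Generic sketch: fine-tune a pretrained image classifier on labelled photos.
# The "bird_photos/" path and training settings are hypothetical; the point is
# that someone must first collect, sort, and label the photos by species.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Labelled data in a machine-legible layout: bird_photos/<species_name>/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("bird_photos/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a model pretrained on generic images and swap in a new output
# layer with one class per species (taken from the folder names).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a real project trains far longer, with validation
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The few lines of training code are the easy part; the years of volunteer effort go into producing the labelled folder it reads from.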
To go back to the original question about whether we’ll be offloading our thinking to AI, I think that like any tool, AI just reinforces and strengthens whatever we’re already doing. It’s going to repeat back to us whatever values and priorities we feed into it. For example, if you ask ChatGPT how to get rich, it will just tell you how to get rich. Maybe you want to make money to support some higher goal, or maybe you think that getting rich will make you happy and give your life meaning. ChatGPT doesn’t know or care. It will give you answers to the questions you ask, but it doesn’t help you ask the right questions in the first place.
AI is clearly powerful. However, I don’t think the biggest concern is with the tool itself, but rather with those who use it. Anyway, we’ll see what happens…
Some other things that have been on my mind lately…
A charming post from the best-named Substack:
Maybe go look at some flowers or something. CS Lewis has good thoughts on doom, for those interested in the topic.
or so I’ve heard
Although I don’t have experience, I get the impression that the priority disparity is even larger for dating apps, and is a large part of why they suck…
Not to mention… real life.
Edit: Yikes wow they’re a privacy nightmare too.
Interesting thoughts. As one who specifically moved away from management work to handyman work because of the coming of AI, I do not see how AI-loaded robots could do trade work. I could imagine robots doing very basic aspects of the work. But there are too many unpredictable variables in this sort of work and too many close judgement calls that are context-specific for it to be a reality. I could see it being attempted by some ambitious developers seeking to cut corners, but we would get the same results (or worse) as when they cut corners and hire cheaper, less experienced contractors. I.e., buildings flood, catch on fire, and sometimes fall down. Any perceived short-term gain is lost in the mid- to long-term cost associated with bad construction. I know, I managed some high-profile projects where corners were cut and it resulted in major loss of revenue, big insurance claims, lawsuits, fines and so forth. As I see it, skilled labourers are sitting in the best position with regard to AI's impact on the job market.
I've written on AI art and I think you are basically correct. It's going to become common and the demand for human-made art is likely to go up as a result.