Top 9 Ways ChatGPT Will Accelerate Society’s Collapse

And what to do about it

by Adrien Book

GPT-3.5 is here. Wait, no. GPT-4 is here. It will revolutionise… everything. Don’t ask for more details. It just will. Negative consequences of having to pay $20 to talk to the closest thing we have to a god? None! Your job? Safe! The very fabric of society? Unscathed!

The hype around generative AI has gotten so wild that even the creator of ChatGPT, Sam Altman, is begging Twitter tech bros and lazy LinkedIn influencers to lower their expectations. He quipped in a recent interview that “people are begging to be disappointed, and they will be”. In fact, disappointment is perhaps the best we can hope for. This technology will widen an existing socio-economic rift in ways that will make us long for the bad old days of social media psyops. We will come to regret not regulating the technology faster. Here’s why.

1. Generative AI will bury the middle class

Globalisation saw many blue-collar jobs offshored from Western countries to developing nations. Then came automation and robotisation, and entire industries’ worth of jobs were wiped out. The same is happening today to white-collar jobs. COVID made remote work successful, and companies are realising that a knowledge worker in India can do the same job as someone at home in San Francisco… for a tenth of the cost.

But this is just the beginning. While remote work is offshoring jobs in an all too familiar pattern, AI tools such as ChatGPT will make entire industries redundant. Customer care centres, for example, will all but disappear, except at premium brands. That’s half a million jobs gone in the blink of an eye. By some early estimates, around 20% of workers will see more than half of their work tasks affected by AI. The middle class has slowly shrunk over the past decade; 2023 is when the cracks become too large to be ignored.

  • What to do about it: Labour unions should work to protect some roles from being too eagerly automated. This has already begun in Hollywood and needs to become widespread.

2. An AI social rift will be created

As highlighted above, low-effort knowledge jobs will be replaced by AI in the very near future. But that will not apply to service jobs at luxury brands. In fact, these companies will use human-to-human communication to highlight the benefits of paying a premium. This trend has already started and is getting more pronounced every year. You can only talk to your airline if you have a first-class ticket, avoid being tracked if you have an iPhone, or get size advice at Gucci.

The increased implementation of AI will (continue to) make one thing clear: there is one system for the rich, and another for everyone else. This has always been true but will take on a new dimension in the coming years. Soon, talking to a human for customer support will be an exclusively upper-middle-class experience. The same can be said for human-made videos, books and articles, which will become luxury products.

  • What to do about it: At this point, boycotts are most likely to work. While it’s easy to shop at the cheaper brands, doing so has a cost. We may come to regret it, too late, if these brands are not challenged when they make clearly unethical decisions.

3. AI will fuel the Culture Wars

We’ve already seen plenty of proof that generative AI tools like ChatGPT can be used to create thousands of articles, in a matter of hours, about any topic. And because — as highlighted above — society is becoming more divided than ever, the media will soon be flooded with content claiming everything and its opposite. Journalists won’t be able to disprove anything; they will either be working for their “own” side… or automated. The echo chambers will be hermetically sealed.

We know from the past 5 years that it does not take much misinformation to tear the fabric of society apart. That was just a prelude. Now that everyone has picked a side in the culture wars, get ready for the real Post-Truth society. In a sign of things to come, a recent report from the Council on Foreign Relations says that if the U.S. doesn’t figure out how to retrain workers displaced by AI, politics is sure to grow even uglier than it is now.

The real winners will be the technologists, who will be ignored as entire populations learn to live with the ever-present and distinct smell of tear gas. Stop me if you’ve heard this one: a powerful and privileged class weaponises technology while monetising their monopoly on violence to create elaborate diversions for the masses.

  • What to do about it: Support trustworthy journalism if you can. Talk to your neighbours. Try to understand others. Ask questions.

4. Deepfake videos will topple an S&P 500 CEO

It’s not only fake text and images that are getting better. Deepfake videos, too, are becoming increasingly easy for the general public to make. Worryingly, companies and governments clearly don’t have the proper tools to fight them. Short of resorting to full-scale authoritarianism, that is…

It’s only a matter of time before a video emerges of an S&P 500 CEO doing something ungodly to a pig. That video will be fake, but it will not matter, as social algorithms are tuned to make controversial content spread faster than the truth. The unlucky executive’s stock will tank before the truth comes out. Once the floodgates have opened, we will not all be at risk; for once, the powerful will suffer first. But the government-mandated medicine that will swiftly follow will likely be worse than the disease.

  • What to do about it: The first deepfakes will be fun. They already are. The thousandth may be a lot less so. Punch up, never down, and stay mindful that the algorithms are working against us, not for us.

5. All politics will become about content moderation

A Post-Truth society will likely birth a new wave of political questions. Should AIs be banned? Should all training data be government-approved? Who is liable in case of a mistake? Are falsehoods illegal? How can we tell? What is considered spam? How hard should we crack down on it?

In 2023 and beyond, all politics will be about content moderation, with one question being repeated: how much freedom are we willing to give up in exchange for increased security? Seeing how many countries have veered towards authoritarianism in recent years, it’s likely we’re already seeing the beginning of an answer. Propaganda and total governmental control over media are, in a way, the strongest content moderation possible. That’s a high price to pay for being able to ask ChatGPT to cheat on one’s homework.

  • What to do about it: Vote in every election. Ask politicians what they believe to be the answers to the questions above. Hold them accountable. Care.

6. We will witness a training data disaster

Present AIs are trained on man-made datasets. The articles we write, the images we draw, the videos we make… But future AIs may be trained on datasets created by AIs, purposefully or not. This would, over time, amplify a thousand-fold any mistakes or unethical behaviour that may have been hidden in the original training “material”.
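To make that feedback loop concrete, here is a minimal toy sketch in Python (entirely made-up numbers, nothing resembling a real training pipeline): each “model” is simply a distribution fitted to data generated by the previous one, so small estimation errors are never corrected and compound from generation to generation.

```python
import random
import statistics

# Toy feedback loop: generation 0 is "trained" on real data; every later
# generation is trained only on data generated by the previous generation.
random.seed(7)

def fit(samples):
    """'Train' a model: estimate mean and standard deviation from the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    """'Generate' a synthetic dataset from the current model."""
    return [random.gauss(mean, stdev) for _ in range(n)]

real_data = [random.gauss(0.0, 1.0) for _ in range(200)]  # human-made data
mean, stdev = fit(real_data)

for generation in range(1, 11):
    synthetic = generate(mean, stdev, 200)  # the model's output becomes the next dataset
    mean, stdev = fit(synthetic)            # the next model never sees real data again
    print(f"generation {generation:2d}: mean={mean:+.3f}, stdev={stdev:.3f}")

# The estimates drift away from the original distribution over generations:
# nothing in the loop pulls them back towards the real, human-made data.
```

The point of the toy is simply that nothing in the loop corrects earlier mistakes. Real training pipelines are vastly more complex, but the same logic applies once synthetic content quietly replaces human-made data.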

In the coming years, a series of human errors will lead a prominent tech company to release an algorithm trained on artificial data, resulting in a very public disaster. The smart money is on a self-driving car accident, but we’ve heard enough about autonomous weapons to know it could be much worse than a few car crashes.

  • What to do about it: For the love of god, don’t let autonomous weapons become a thing.

7. Popular culture will die

As people and companies start to create content using generative AIs, and as that content gets fed and digested (mistakenly or not) by other algorithms, we will witness the death of popular culture. All content will become increasingly homogenised to appeal to the lowest common denominator. And I’m sure it’ll be good. We’ll eat it up. It will have been designed to be the best entertainment mathematically imaginable.

But in doing so, it’s likely we will lose the little kinks and pieces of weirdness that make life (and content) fun and unexpected. The generic formulas are tried and tested… but we should aim to be surprised, and surprise others in ways an AI couldn’t have predicted.

  • What to do about it: We should foster human creativity whenever possible. We can do that by promoting diversity within our culture, embracing unusual collaborations, and encouraging experimentation and risk-taking. Regulations and ethical guidelines would help, but don’t hold your breath.

8. Artistic labour will be marginalised

By over-relying on AI, we will essentially kill the artist. This has already begun, in fact. This, above all other items mentioned in this article, is what would bring society to its knees.

Art reflects the values, beliefs, and experiences of a society or a particular group within it. Art has the power to evoke emotions and create a shared experience among people. Art encourages dialogue and debate by challenging existing norms and beliefs. Art stimulates creativity and imagination, encouraging individuals to think beyond existing boundaries and explore new possibilities. Art serves as a record of history, preserving cultural heritage and collective memory. Art addresses societal issues, inequalities, and injustices. I could go on. Without art, and the artists who make it, we’ll be lost.

  • What to do about it: make dumb art. Make art that cannot be monetised. Show it to others. Explain what it means to you. Connect your experiences with those of others. Only then can we grow.

9. Death will (kind of) die

With more and more AIs being readily accessible and easily usable, it’s only a matter of time before we use those tools to counteract the loneliness epidemic brought about by social media and other related technologies. Such tools will become widespread sooner than we think. It may start with Cortana- or Alexa-like avatars, but will quickly evolve. For those with no other options, talking to an AI is better than nothing.

And few have fewer options than those mourning the loss of a loved one. There is definitely a generative AI market targeting these people. Whether that market is ethical is another story. We’ve already seen start-ups working to use artificial intelligence to re-create the voices and personalities of deceased loved ones. And that’s probably just the beginning.

If we never learn to move forward, how can we ever be part of a productive society?

  • What to do about it: People dying is part of life. Don’t cry because it’s over, smile because it happened. See a good therapist.

Technology has both saved and doomed us in so many ways that one might be inclined to think that it is inherent to our nature to create tools that simultaneously do both. We at Joberty are trying to use it in the best way possible with our perfect match algorithm.

Generative AI is no different. It is essential to proactively address the ethical, social, and economic implications of these new tools. Instead of being swept away by the hype, we should collectively prioritise responsible development, transparent governance, and robust regulation of these technologies.

It will be difficult. That is, however, no reason not to persevere. Tackling complex challenges in the face of daunting odds is what humans do best.

