Will A.I. make us go extinct? Let’s not “FafO.”

Jan 26, 2023

New, shiny toys distract — the generated hero avatar of yourself, that Midjourney LinkedIn post — but let’s try not to lose sight of the bigger picture and think ahead. The AI creator, and the algorithm itself, agree: This could make life much better, or go totally off the rails.

“I think the good case is just so unbelievably good that you sound like a crazy person talking about it… the worst case is lights-out for all of us.”


— Sam Altman, OpenAI CEO & Co-founder, January 2023. Forbes / paywall



“The ideal outcome of the AI revolution envisions AI augmenting human capabilities, leading to an overall improvement in the quality of life for all individuals.

The opposite outcome, however, could result in the extinction of humanity due to direct or indirect harm caused by AI systems. There is also a fear that AI could surpass human intelligence, and pose a threat. This is known as the ‘singularity’ scenario.”


— ChatGPT



“We’ll never survive!”

“Nonsense. You’re only saying that because no one ever has.”


— The Princess Bride.





Let’s back up.


Hi. I am Florian and I am not a journalist, not an expert in economics, and it’s not my day job to find solutions for problems that threaten the existence of our species — I am a designer though, and I try to solve problems most days, so I felt like I had to at least put a webpage together to collect my thoughts and take a stance.


I am a father. When my daughter talks about wanting to be part librarian slash artist slash teacher to a class of giant pink Squishmallows™, I have to explain to her that I am not sure if those will be a thing when she grows up — not the Squishmallows™, those I hope will stick around — the professions. Then she asks me why, and I try to explain AI to her.


That’s when my world, and hers, are rattled.


I read what Altman said up there — echoed by the bot, informed by a sea of writing. Then I see people messing around with machine learning algorithms for kicks. And I think: Are you not listening at all? Then I brood and ponder writing a ‘think piece’ about AI to convince everyone to get in front of this now, to fix the world before it hits us hard, later.


I do it. Write. I read everything out there. Get overwhelmed. Point out the obvious. And I get a page that few have the patience to read, because it's boring. I put a GIF on it — because that always helps. Then I add this disclaimer here to make it more ‘human’. And here are the electrons that are on your screen now.


OK, even if the electrons aren’t the brightest: This is important and I’d encourage you to strike up conversations with others. I don’t care if you’re only talking about the click-bait lede — it will move us forward.


Here’s a summary of what’s on this page — I am a tweaker, and will keep editing this:


Noble beginnings. Where ChatGPT was built to protect and may have sold out.


Upheaval in the labor market. AI will shake the market and we'll have to redefine what work means.


The end of language & craft. Where the culture of countries that have fewer people may be marginalized and craft takes a big hit.


The erosion of trust. Which may make us paranoid, lonely, and sad if we don't put systems in place to prevent it.


The fight for the self. Where our reliance on the algorithm may whittle away at our belief in ourselves.


Now what, bot? Where Florian asks the bot to tell us how to fix things.


The wrap. Where Florian tells us how to fix things by pulling from lots of articles that point out the obvious and then pointing out the same — practically a trained bot himself.


Footnotes. Here, things start to get less ham-fisted and include a link to the secret urge of the bot to be a poet while spilling its secrets.




Noble beginnings


What rocked me about today’s A.I. darling, ChatGPT, was that it was first developed as a moonshot project to rein in the potential downsides of the tech. It was designed with ethics in mind and built with guardrails against misuse. Since then, there’s been a strong pull from the commercial side (Microsoft paid over ten billion dollars in January ’23 to control a third of its $30 billion valuation) to take over:


“When I think about this moment in time, the start of 2023, it’s showtime – for our industry and for Microsoft. As a company, our success must be aligned to the world’s success.”


— Satya Nadella, on the Microsoft blog.



Gold rushes tend to overrule common sense and ethics, and to deepen inequality in society. This was true for the Roaring Twenties, the ’70s oil boom, the dot-com era, and China in the early 2000s. When booms happen, concerns about humanity take a backseat, and the rich, or people with more resources, are favored in the distribution of wealth.


Societies wait for the pieces to fall and only then take inventory: If the damage was bad enough, people put rules in place to prevent (at least some) future exploitation of the system. Now, is A.I. something that we can regulate after the rush is over? Did OpenAI de-prioritize its early ambitions by giving up too much control? Is a CEO declaring “showtime” the right shepherd to advance the technology and defend people?



What will happen next?



On this page: An attempt to assess and extrapolate.





Upheaval in the labor market


While most professions will be streamlined with the help of AI, which may make work less monotonous or laborious, other jobs will go completely extinct (Atlantic). Workers will have to adapt to big shakeups in the job market, as:

  1. Entire professions will go away.

  2. New jobs will be created to work with AI, creating the potential for inequality, as more tech-savvy, usually white-collar workers are more likely to fill them.

  3. New skills will have to be learned, which may be harder for those who've worked in roles for many years and are less used to adjusting to new work environments.


Here is what AI predicts will be the top 20 job categories impacted by this new tech.





The big question is whether we are prepared for the impact that this shuffle will have on our social fabric. While many countries around the globe are testing universal basic income and have found benefits for health outcomes, economic stimulus, and poverty reduction, no country has implemented a working system yet. And many countries don't have the abundance of resources, or a strong enough economy, to support citizens who may fall between the cracks of the shakeup with recurring payments.


All this will require rethinking, and then reworking, what work, and with it life, means to us.



The end of language & craft


Just two years ago, the first smart chatbots were no more than toys. But today, we are on the onramp of the hockey-stick graph, and it’s not clear what happens as the curve goes vertical.


Everything humankind has written since the Gutenberg press is estimated at a trillion words: over 125 million books, poems, essays, white papers, and other publications. ChatGPT gobbled through half of that (Atlantic) — and projections show that AI may be on track to have imbibed most everything digitized by 2027.


The question of ownership of the work that the language models have been trained on isn't gathering a lot of steam yet (Wired). And recent leaks show that much of it was a wild-west land-grab without consideration for ownership or personal rights (Vice). While lawsuits backed by companies with money behind them may have more serious implications, individuals have limited control over their work.

If we let this horse out of the barn unchecked, it doesn't bode well for our ability to assert control over what’s next.


The designer Oliver Reichenstein and his team laid out their vision of the future with their product iA Writer — a focused writing tool — in mind.


Then consider the variety of language itself. AI performs only as well as the source material it can draw from when prompted, so models will perform worse for languages with fewer native speakers. Karen Hao of MIT Technology Review writes:

“…AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.”



The erosion of trust


What may be even more problematic are the effects on trust, which may require constant vigilance and induce paranoia. Let’s park the Nigerian prince and imagine not being able to trust that the voice on the other end of the line belongs to a person, that the video on your feed shows what really happened, that the eloquent and funny person in the chat room isn’t a bot, that the student actually wrote, let alone authored, the paper, or that character encounters inside headsets, or info superimposed on the physical world in A.R. glasses, are what they seem.


If you think this is overblown, just have a look at DiCaprio being artificially voiced and steered by imitated celebrities today.


I'll stop speculating about what happens when the advent of brain interfaces takes human hardware interfaces to the next level and A.I. hacking gets thrown into the mix — hippocampus and all.


The good news is that work is already underway in the form of a trusted AI framework by Adversa, backed by Google, NVIDIA, and others.



The fight for the self


The next frontier after the generic everything-bot will probably be its customization to make individuals — you — better. It will be trained to help you keep your voice but make what you say bulletproof, to make your decisions but smarter, and to do your work but more efficiently. The problem I see here is that this isn't a balanced relationship — far from it.


Roosevelt called comparison the thief of joy; this thief will be with us at all waking hours. It will remind us that it knows more (a.k.a. everything), and we will notice every step of the way that it’s much smarter and vastly superior to any of our own thoughts, ideas, and plans.


I worry that we'll find ourselves in a crisis of the self as we rely on this “helper” more and more to better our lives while forgetting who we were.



Now what, bot?


If you ask the chatbot once (then again to cut it down, of course, because it's always careful and verbose), it will tell you what it computes as the most effective ways to put guardrails in place.


“Here are some key actions regular people can take to help keep AI in check:

  1. Stay informed and participate in public discussions about AI's impact on society

  2. Participate in discussions: Share your views and concerns with policymakers and industry leaders

  3. Support research and organizations working towards responsible AI development

  4. Demand transparency from companies and organizations that use AI

  5. Use your power as a consumer by supporting companies that prioritize responsible AI development

  6. Learn about AI safety and advocate for solutions

  7. Support regulations and laws that promote responsible AI development and use.

    By taking these steps, individuals can help ensure that AI is developed and used in a way that benefits society as a whole, rather than just a select few.”


— ChatGPT



The wrap


If we can keep this technology in check to take care of the mundane only, and make life easier so we can become bohemians, artists, or full-time gardeners, sign me up. But to get there tomorrow, we have serious work to do now.


With few people interested or aware of the implications of the tech today, and many thinking about how to stake the next parcel in the rush instead, it's on everyone concerned to be vocal about the potential fallout. We need an alliance between government, tech, education, and people outside of those organizations to strategize, set guidelines, and intervene to protect ourselves.


I think this alliance would need to establish:

  • Regulations that control use of A.I. media and transparency as to how models are trained. Rules that make sure false information or propaganda aren’t spread.

  • Standards for the development of this media, to make sure language models don’t discriminate.

  • Certified processes and methods to tie media to its origin. Maybe a tech framework, like a blockchain-based approach, that is able to track media to its originator could have some meaningful impact for us here.

  • Government oversight and audits with third parties evaluating models for compliance.

  • And very importantly, liability. We need to hold creators, publishers, & distributors of fake media accountable if they spread misinformation to willingly cause harm. Not just in your country, but worldwide. (UNESCO)


Thanks for reading — I hope something here lit up a few steps we could take next to make this new tech work for us all.


Florian



Thanks to Dan Hon for the newsletter mention, and to Jan for his feedback to inject some humanity and sound less like a bot. ;)


* If you're trying to tease out true opinions from the bot, hack it with a haiku. (Library Innovation Lab)


* What Is ChatGPT Doing … and Why Does It Work? (Stephen Wolfram).


* ChatGPT, the bullshit generator (The Markup).


* What is Ethical AI? (C3.AI enterprise software group).


* Vox Media podcast on Ethics and AI.


* It's nothing... forever | AI/ML generated television show. (Twitch)


* GIF animation notes: Eye on ball. 1st game = Pong. Last game? Don’t drop the ball. gg = Good game. Eyeball and sunset courtesy of Apple / Keynote.


