
Big Tech is playing fast and loose with our society - again

This is from What I Learned This Week - about the way Big Tech is now conducting a grand experiment with AI on our society. With social media they moved fast and broke things - and how badly they broke things is now evident in our democracy and our mental health, particularly among young girls. They are doing it again with no checks or guardrails.

In order to best understand the possible consequences of this AI Revolution, it is necessary to look back and understand some of the unintended consequences of our previous experiments with the internet and social media. In fact, the internet is what has made these new AI breakthroughs possible, by amassing the large datasets needed for their training. Furthermore, the internet, along with mobile computing, is what will bring AI to the world in the coming years. In this way, the internet and AI are inextricably linked. What we are about to witness will be part arms race, part land grab, and part gold rush, at unprecedented scale and speed, and with very little guidance or forethought.

Big Tech’s “move fast and break things” attitude is about to go into full overdrive with AI. But what are the things that will be broken? In the age of social media, those things were people: the genocide in Myanmar, undermined democracies and political instability, and legions of depressed teenagers are just a few of the consequences of this reckless attitude. In fact, Facebook (now Meta) knew of these consequences at the time, as their own internal research has shown: “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse”.

Social media rewired our collective consciousness and inserted algorithms that amplify outrage and polarization, aimed right at the human amygdala like a loaded gun. This has had a disastrous effect on our collective psyche. Furthermore, it facilitated a spread of disinformation that eroded faith in our institutions and called into question long-proven science on everything from the germ theory of disease (during the Covid pandemic) to gravity itself.

These companies know that their current AI products are broken, but they feel the best way to fix them is in a grand public experiment on all of us. As a recent story in The Wall Street Journal put it, we will all be “guinea pigs” in this giant AI experiment.

There are myriad problems with conducting such a grand experiment on an unsuspecting public already weakened and demoralized by a broken social media system, but we think these are two of the biggest long-term concerns:

First is the displacement of our collective human knowledge by disinformation, misinformation, and outright gibberish. Worse yet, it will be authoritative gibberish—because style is one thing that ChatGPT always gets right, even if its substance can be horribly wrong. As many experts have noted, ChatGPT has proven able to produce extremely convincing hallucinations, including false citations to non-existent scientific papers. Such falsehoods can take time to discover and disprove, and now they can be produced at scale and on the cheap.

This makes this type of AI extremely dangerous as a weapon of disinformation where authoritative style matters a lot and the veracity of content matters not at all. Contemporary disinformation warfare is all about style over substance and quantity over quality. As Steve Bannon once infamously put it: “Flood the zone with shit.”

The second problem is the human toll of these companies wanting everyone to interact with a mindless chatbot partially trained on some of the worst language and behaviors from the past fourteen years of a broken social media experiment.

In 1962, Arthur C. Clarke famously wrote: “Any sufficiently advanced technology is indistinguishable from magic.” And the creations of generative AI technology, such as the DALL-E image generator and text generators like ChatGPT, are currently inspiring the sort of public fascination, reverent awe, zealous enthusiasm, visceral fear, and outright hostility that perceived acts of “magic” have always received. As one NPR reporter recently put it: “The technology is both awesome — and terrifying.”

But this is not magic. A Large Language Model (LLM) is simply a technological solution to the problem of predicting the probability distribution of the next word in a sentence, paragraph, essay, and so forth, given the words that came before it. This is why psychology professor and AI scientist Gary Marcus has likened ChatGPT to “autocomplete on steroids.”
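
To make that concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that estimates which word is likely to follow another, based only on counts from its training text. Real LLMs are vastly larger and use neural networks rather than simple counts, but the underlying task is the same kind of next-word prediction; the tiny example corpus and function name here are invented for illustration.

```python
# A toy next-word predictor (illustrative only, not how production LLMs are built).
# Like the far larger models discussed above, it only learns which word tends to
# follow which; it has no notion of meaning, truth, or the world.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return the estimated probability of each possible next word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_distribution("sat"))  # {'on': 1.0} - pure pattern-matching, no understanding
```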

LLMs do not understand what the words they use actually mean. They only know which word is likely to come next in a sentence, based on the corpus of examples they were trained on. As predictive tools, they simply have no symbolic model of the world around them, nor of their place in it, nor of the intricate fabric of morals and behavioral norms in a complex society. In the purest sense, they are agnostic, having no idea of true or false, nor of good or evil.

One reason that these LLM systems often respond in strange and hostile ways (that seem almost human) might be the voluminous amount of internet content that they are trained on, which includes chatroom discussions and Reddit threads scraped from the web and social media by services such as Common Crawl. If one is training an AI model on the conversational back-and-forth of humans, then internet chatrooms and social media are not exactly humanity at its finest. This may be why our first AI products are far from aspirational bastions of hope.

Furthermore, we may be expanding an existing runaway feedback loop of human beings behaving badly into new domains. We already know that social media has coarsened human interaction by algorithmically rewarding outrage and argument for the sake of user engagement. Now, we are spoon-feeding this resulting toxic discourse into what is soon to be a ubiquitous army of AI chatbots and text generators across the networked world, which will, in turn, further pollute the internet, which will probably end up in yet more chatbot training, ad infinitum, ad nauseam. If, five years from now, your toaster oven or vacuum cleaner goes on a rant and physically threatens you, this will be why...

This is not to say that all Deep Learning AI is bad. Specialized machine learning systems have been able to predict chaotic behavior within complex systems, a truly miraculous feat that could be enormously beneficial in complex fields of inquiry such as medicine, epidemiology, climate science, economics, and risk assessment. It is the mad rush to conduct a massive public experiment with AI, by some of the same cohort responsible for the last decade’s social media debacle, that we are questioning.

Neil Postman, who presciently warned of the dangers of new technology to our political and social discourse forty years ago, offered one of the most important questions we should ask of any new technology: What problem does this technology solve?

