“Spicy autocorrect.”
That is what applied algorithmic ethics pioneer Dr. Rumman Chowdhury and her colleagues at Humane Intelligence, a nonprofit launched in February and devoted to uncovering issues in AI systems, call the large language models (LLMs) that power generative AI tools like ChatGPT, Bard, and LLaMA. These models, says Chowdhury, are “not some machine version of the human brain” but rather “a sophisticated application of predictive text.” They can be very good at predicting what a human would say next, drawing on what they have learned from vast data sets of text created by human beings. But is machine-produced mimicry, even when done in inventive ways humans had not themselves predicted, equivalent to the work of the living being who originally created that text, that piece of art, that musical contribution? Should that machine be treated as an equal to the human being who first had the thought and felt compelled to share it with his or her fellow humans?
Meredith Broussard, a leading researcher and educator in the field of algorithmic bias, would surely say no, reminding us that “AI is just math.” Can that math do impressive things? Absolutely. Will that math have profound impacts on the world now and in the future? Surely. Is it a living force that, now unleashed, cannot be held back or redirected, one that will compete with human beings for dominance on every level, from intelligence to work performance to our very nature as living beings? I would argue no.
AI and robotic technologies are not living. They are math and mechanics created by humans, and it is critical that we not lose sight of the humans behind the AI curtain. ChatGPT was created by the humans behind OpenAI, including Sam Altman, Trevor Blackwell, Greg Brockman, Vicki Cheung, Reid Hoffman, Andrej Karpathy, Durk Kingma, Jessica Livingston, Elon Musk, John Schulman, Ilya Sutskever, Peter Thiel, Pamela Vagata, and Wojciech Zaremba. Though it began as a nonprofit endeavor, it now operates as a “capped” for-profit enterprise (a “cap” Altman would not clarify in a recent interview), bringing Altman and his team a projected $1 billion in revenue by 2024. It operates through the engineering and coding talents of dedicated (but publicly unnamed) teams of human researchers at work daily to advance the technology. It is improved and made safer by underpaid and undervalued human beings, including the Kenyan workers paid less than $2 an hour by OpenAI to label examples “pulled from the darkest recesses of the internet,” with graphic descriptions of situations “like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.” It takes an army of humans to create the “magic” of AI, a project that the vast majority of humans did not ask for and may not, in fact, want.
As Meredith Broussard noted in an interview with Tate Ryan-Mosley of MIT Technology Review, “One of the things I realized, as a cancer patient, was that the doctors and nurses and health-care workers who supported me in my diagnosis and recovery were so amazing and so crucial. I don’t want a kind of sterile, computational future where you go and get your mammogram done and then a little red box will say This is probably cancer. That’s not actually a future anybody wants when we’re talking about a life-threatening illness, but there aren’t that many AI researchers out there who have their own mammograms.”
That’s not actually a future anybody wants. How many of the impacts that we are now seeing from AI and automation fall into the category of things the vast majority of humans didn’t ask for and don’t want? I have a good friend who is a nurse at a local hospital, and when you ask him what kind of nurse he is, he will tell you, “I’m the one who holds their hand.” That is what human beings, living beings, want. We want connection. We want authenticity. We want a warm hand rather than the cold metal of a machine. We want someone who knows what it feels like to be human, to truly live in this complex and often challenging world, and who will join us and support us on the journey.
That want has led to an overall distrust of AI and of its creators, as a new poll from the AI Policy Institute makes clear. The poll also reveals a deep fear over the future impacts of AI: 86% of respondents believe AI could accidentally cause a catastrophic event, 82% don’t trust tech executives to regulate AI, 72% prefer slowing down the development of AI, and 62% are primarily concerned about AI, versus just 21% who are primarily excited about it. More concerning to me, though, is that 76% of respondents believe AI could eventually pose a threat to the existence of the human race and that 70% agree that mitigating the risk of extinction from AI should be a global concern. My concern over those specific numbers stems from their focus on a future grand act of extinction rather than on realized harms to human lives in the here and now.
The more we buy into this notion that AI is somehow living, somehow equal to the irreplicable, irreplaceable human, the more we grant it “independent actor” status rather than recognizing the humans behind the AI curtain, humans profiting in the billions from it and, to date, escaping any liability for the harms their creations inflict. As Chowdhury reminds us, “There is such a significant disempowerment narrative in [AI] Doomer-ism. The general premise of all this language is, ‘We have not yet built but will eventually build a technology that is so horrible that it can kill us. But clearly, the only people skilled to address this work are us, the very people who have built it, or who will build it.’ This is insane.”
Note the language she uses there. We (humans) have built it or will eventually build it. The people skilled to address this work. The very people who have built it. These are not mystical, living machines. These are technologies built by humans, trained on the work of humans, deployed by humans, used by humans. And humans, those involved in their development and deployment, should be seen as responsible for their creations. As renowned scholar and activist Joy Buolamwini argues, “AI as it’s imagined and dreamed of being is this quest to give machines intelligence of different forms, the ability to communicate, to perceive the world, to make judgments. But once you’re making judgments, judgments come with responsibilities. And responsibility lies with humans at the end of the day.”
As a person adept with the English language, I could certainly expand the definition of what constitutes a living being in ways that allow AI in. The creative nature of language offers us that opportunity. But to what end would we be doing so? Such an expansion makes AI the party responsible for its actions rather than the human actors who created and deployed it (and who profit handsomely from it). Shifting the definition shifts our attention away from those living beings and from the real harms being done by their creations right now. Doing so also amps up fear in the minds of humans across the globe, leaving them disempowered and fearing that all of this—including a potential catastrophic event—is inevitable, because the tech titans and the press have told them so. They are left to imagine this all to be prophecy rather than the possibility it truly is.
The truth is, AI is math. AI is spicy autocorrect. And it is something we human beings can choose not to hand our power over to. We can choose works from human authors, artists, creators. We can bet on ourselves and do our own work, creating something truly unique, truly awe-inspiring, truly extraordinary in the process. We can elect to work for an employer who values human talent and creativity over the efficiency and bolstered bottom lines automation may bring. We can demand human contact from the companies we do business with rather than settling for the incessant frustration of automated systems. We can recommit to connection, to face-to-face contact rather than comments posted on Facebook. We can choose caregivers and providers who hold our hand through the harrowing moments this life can sometimes bring. In doing so, we revalue the living beings in our midst, and we openly reveal the living beings behind the AI curtain.