Marc,
There are two moments in my life that have, for me, defined humanity. The first of those moments came on a late afternoon long ago at a time when I was geographically separated from my family, friends and all that I had once known. After leaving our home state of Montana the morning after our high school graduation, I traveled across the country to Arkansas, where the people spoke and behaved according to a different custom, where the landscape lacked any comforting familiarity, where even the temperature and air were far different from what I was accustomed to. For a time, I got lost in that difference. I felt unprepared for the world I was now living in. I was lost in my aloneness at the same time I was cast into a hypervisibility to the rest of the community because of the speed of my speech and my lack of a southern drawl. Who I had been no longer felt useful or relevant and who I might become was muddied by uncertainty and a sense of dislocation. And on that late afternoon, with the weight of both aloneness and hypervisibility bearing down in ways that I could no longer hold back and hide, I knelt on the floor and wept.
Then I felt another kneel beside me, felt his tears fall on my shoulder, and knew that I was not alone. In that moment of shared grief, shared understanding, shared compassion, shared humanity, I came to see the emotional center of humanity, an ability to fully and truly understand what it is like to navigate this challenging world in a physical body that often fails us with the clock of mortality ticking away as we try to create a life of purpose, meaning, value. It was the touch of another, the tears of another, that healed me in that moment. With the depth of that aloneness dispersed, I was able to more clearly see the beauty of what was before me instead of the absence of what I had left behind. I came to love the people of that great state, came to appreciate the greener landscape (if not the humidity) and their slower pace of speech so that words were heard, felt, appreciated. I still treasure memories of my time in the South.
The second moment was more recent and will be one you are more familiar with, coming at another turbulent moment in my life where I once again lost my sense of self and direction. In the midst of a messy divorce, living in rural Montana where my PhD meant little in the way of economic opportunity, and with two children still reeling from the traumatic effects of a tragic car accident, I was stuck and seemingly out of options. There were no jobs in the region that I was educationally or experientially qualified for, my training too specialized for most regional employers. There was no money for a move across the country and, even if there had been, my custody agreement meant that I would legally have to leave my kids behind in order to make such a move. Jobs that were available to me and did not require a different type of advanced education would not cover the costs of providing for my family, and the long hours worked in the attempt would have left my kids without the center of their emotional support as they tried to heal from their trauma. I managed to get an online instructor position with BYU-Idaho, a position I was deeply grateful for and still enjoy today, but that part-time role would not cover my family’s financial needs. I seemed to be out of options. I was quickly losing hope.
Then you stepped in with a big dream and an offer to come help you build it. As an engineer, you were acutely aware of the speed of technological innovation, particularly innovations in AI and automation, and their resulting impacts on society. You were deeply concerned about the ways those innovations would impact the ability of our fellow human beings to create the lives they chose in the regions they chose to live. You wanted to empower our fellow humans with the information they needed to still carve out those lives of meaning and to participate in the conversations about the technologies that have such a heavy impact on their lives.
On paper, I was not the right co-founder and co-creator for this project. Had you placed an ad through a job site built on a foundation of AI, I would not have been one of the top candidates that system selected and put before you. But you knew differently and better. You knew me to be a deeply empathic and driven human being who was committed to the welfare of each of her fellow human beings and of humanity as a whole. You knew my research and rhetorical training had prepared me to consistently question, to critically analyze, to see what was missing and, even more importantly, who was missing in these conversations. You knew me to be tenacious about getting the right information to the right people at the right time in order to create real impact. You knew me to be one who would look for the hope, the positivity, the reasons to press forward, the light to share with all the world. In short, you did not behave as a “biological machine” in the making of that decision. You behaved as a caring brother who wanted to help lift a sister stuck in difficult circumstances. You behaved as a compassionate human being able to see potential in alternative ways, able to see actuality that had not yet been realized. And you behaved as a committed member of society who believed that the best ideas come from diverse industries and sources and that, when seated at the same table, we are able to create new ways of seeing and thinking that help solve some of society’s most challenging problems.
Those promoting the rapid and unquestioned advancement of AI and automation want so badly to quantify humanity, to reduce its complexity to key attributes like intelligence — a term that is itself not well-defined in an agreed upon way, particularly between computer scientists and the general public — and to create an efficient definition that can be programmed into the algorithm and replicated by the machines. AI was built on that effort, the very name artificial intelligence invoking the computational metaphor now so persistent in our public conversations and leading to the circular logic that “the brain is a computer is a brain,” as noted by neuroscience expert Alexis T. Baria and cognition expert Keith Cross in a 2021 scholarly paper bearing that title. But reducing something so inherently nuanced, something felt and experienced and shared in ways that resist static definitions or any quantifications, “affords the human mind less complexity than it is owed and the computer more wisdom than it is due,” argue Baria and Cross. I would agree.
Such an act also risks dehumanization, a word that has been clearly defined by a consistent voice in the AI debates — that of computational linguist Emily M. Bender, co-author of the well-known “On the Dangers of Stochastic Parrots” paper — as “the cognitive state of failing to perceive another human as fully human…and the experience of being subjected to those acts that express a lack of perception about one’s humanity.” As Bender notes, “we see a lot of things going wrong in our present world that have to do with not according humanity to humans.”
As a rhetoric scholar, I know the importance of definitions as a way to create a common foundation on which to build a conversation, but I am equally attuned to their limitations. The static definition of the word alone offered by the Merriam-Webster dictionary is being “separated from others,” but human beings are very familiar with the sense of being alone in a crowd, alone in the midst of others, separated by a multitude of factors, including race, identity, emotional energy, general wellbeing, ideologies, interests, knowledge, capabilities, etc. The definition doesn’t capture the kaleidoscope of lived experience related to feeling alone. Similarly, the word potential is defined as “existing in possibility” that is “capable of development into actuality.” But who decides whether we carry that capability? Who gets the power to say the final yea or nay to possibility becoming actuality in our individual lives? What about in our collective lived experience? And are we willing to hand over that power and decide that someone, or some AI system, knows better than we do about our own possibility, individually or collectively, for actuality?
And what about humanity? Merriam-Webster offers that humanity carries with it the characteristics of being “compassionate, sympathetic, or generous” and that it is the “totality of human beings.” But which of us would confidently claim that those three adjectives adequately describe the full range of humanity we see extended to others in their moment of need around the world? Who among us would not say that we have seen it expressed differently, and beautifully so? And which one of us would today claim that the “totality of human beings,” or even the totality of a single human being, could be fully captured and replicated by AI?
I would argue that humanity is three things: it is felt, it is experienced, and it is shared. It is defined both by and through our engagement with and actions toward one another in specific contexts. It is that interconnectivity, that shared experience, that is central to its meaning. Compassion, sympathy, and generosity are experiential emotions. An AI system, by its very nature, cannot be those three things because it cannot feel. AI cannot provide a human touch, a hand on our shoulder, a hug so tight it squeezes the pain away, a shared smile to bring light back into the day. It cannot weep with us, its tears falling in shared solidarity and understanding, because it cannot feel and cannot understand. It predicts, it processes, it calculates, it sorts, it generates, and there can be real value to those efforts in some contexts and toward some purposes. But it cannot feel compassion. It can express compassion by using words it has been trained on that carry a compassionate tone, but it cannot feel that emotion. It can express sympathy but cannot be sympathetic. It can express generosity but cannot, itself, be generous. It neither feels nor understands those emotions.
Amnesty International recently used an AI-generated image of protestors, leading the art world to ask a critically important question for the field of photojournalism and for the general public who will view and react to those images: “Would the images have the same emotional charge if those instances of defiance, struggle, and camaraderie weren’t real?” I ask a related question about humanity: is humanity, in whatever beautiful form it is being expressed, real if it is not felt and experienced by both participants? If it is merely the words a human might say but is offered instead by a system without the capacity to feel or understand the meaning of those words, is it humanity?
In a recent article in the Intelligencer, Emily M. Bender noted that, “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.” We are so much more than what AI can mimic. We are so much more than biological machines. We are so much more than definitions can adequately capture, so much more than what can be quantified. As a student of mine who works as a paraeducator recently said in frustration with trying to accommodate AI’s recommendations for revisions to his resume, “Adjusting my resume to get an 83% had my resume looking absurd. It would be crazy to say something like ‘prevented 120 emotional crises.’”
AI won’t be the solution to dehumanization. It is a central part of the problem. It will take human beings experiencing shared humanity to recognize errors of ignorance or inhumanity or inattentiveness and to correct those errors. And some truly extraordinary human beings have been doing exactly that for the past 23 years, beginning in Denmark and now spreading to 85 countries on 6 different continents. The Human Library is an organization that pairs human “books” with fellow human “readers” in a “special dialogue room where taboo topics can be discussed openly and without condemnation. A place where people who would otherwise never talk find room for conversation.” They create opportunities to “unjudge someone,” or, in other words, to correct the mislabeling we have each individually done to each other. Face to face. Open heart to open heart. Human to human.
AI already has embedded within it our errors with one another. Our mislabelings. The ways we have gotten it wrong. It won’t be the AI system itself that identifies and removes those errors. It will be the one who knelt and wept with me on that lonely afternoon. It will be siblings like you willing to see beyond the circumstantial mess and lend their belief to someone who cannot yet see it for themselves. It will be the books and readers volunteering their time and stories to the Human Library so that we all might learn to better understand, value and share with one another. It will be those who read our dialogues and come to believe that there is something truly unique about human beings, something not definable and not quantifiable and not mechanical and not merely biological, that is worth promoting, believing in, prioritizing and re-valuing. We can make this a better, more understanding, more equitable, more just world. We can remove the mislabelings and right the wrongs. AI can help in that process, when used as an effective tool and designed and deployed in participatory ways. But it will be humans who lead the way.