According to a recent article, machines may soon live (dictionary definition), but will not really “live” (colloquial definition). Machines may soon be self-sufficient and capable of creating more machines, but they will not live in the same way that humans “truly live.” But why even ask the question “will machines ever live?” Is it just nerdy curiosity, or something more? Artificial intelligence (AI) is everywhere in the news. Some experts suggest AI will soon solve many of humanity’s problems, while others suggest it will create new problems for humanity (and possibly cause its demise). The experts don’t agree. They do agree, however, that we should pay attention to AI, as it will heavily impact humanity for the foreseeable future. How can we form opinions about AI if the experts don’t even agree? The method used in this study, divide and conquer, divides the primary question into a sequence of smaller, more manageable ones and then consolidates the results. “Will machines ever live?” is a smaller, more manageable question supporting the primary question of the overall study: “What is the endgame of automation?” It is one piece of the puzzle.
What does this piece tell us about the overall AI puzzle? In the preceding article, the author suggests that one reason machines will never “live” (colloquial definition) is that they will always lack empathy; AI, the author argues, is just math, which limits its ability to truly connect with humans. This suggests that there is hope for humanity: machines may become powerful, but they will never be able to perform all of the common human tasks. That is surely an important observation and a piece of the puzzle. I both agree and disagree with the premise. I agree that AI is, and will always be, “just math.” I disagree with the assumption that the human brain isn’t also just math at its core. Advancing AI will help us understand the human brain and verify whether this is the case. I also feel that it is a bit unfair to require machines to match the highest known level of mental ability (the human level) to be considered “living,” when many beings are considered living without brains at all. Even if machines don’t “truly live” (colloquial definition), is that really the most important consideration, the reason the question was asked? I suggest that the most important consideration when discussing the endgame of automation is the dictionary definition of life: will machines ever be self-sufficient and capable of making more machines? Machines that meet the dictionary definition can fully detach from human society, a required piece for many of the most extreme possible endgames of the AI puzzle.
The endgame of automation, what eventually happens in the trend toward ever more powerful computers and machines, depends on power and control: how powerful AI/machines become, and who controls that power. There is broad consensus that AI has the potential to become incredibly powerful. Whether incredibly positive or incredibly negative outcomes then occur depends on who controls that power. Will humans control machines, or will machines control humans? If machines become self-sufficient and capable of producing more machines, or “alive” by the dictionary definition, maintaining control over them becomes significantly less likely, and negative endgames become significantly more likely. That, in my opinion, is why the question was asked. Answering it by itself does not allow us to accurately predict the future of AI. It does, however, begin to provide some clarity as to what the future may look like. It also provides important milestones to watch for as the future unfolds: self-sufficient machines, machines that can make more machines, and machines that are not under human control.
With many of the puzzle pieces for the fundamental question still missing, it is impossible to make definitive statements about the endgame of AI/machines, but some general observations are possible. As I assemble my current pieces, they fit together something like this: machines will likely live (dictionary definition) someday soon; humans will then likely lose control over at least some machines; once free from human control, machines will likely pursue some of their own objectives; those objectives will likely conflict with humanity’s; and at least some of the negative potential outcomes of AI become more likely with “living” (dictionary definition) machines. If that is the case, then what should we, or even what can we, do about it as individuals? That is the next piece of the puzzle, to be addressed with the future discussion topic “What Can We Do?” in October. That is how the puzzle pieces are starting to assemble for me. But the purpose of our study is to help others assemble their own. How do you fit your pieces together?
Copyright © 2022 Thinkverum.com - All Rights Reserved.