Can you really learn anything about automation from simply watching a bunch of movies? It certainly could be enjoyable, and you will gain weight from eating 31 movies’ worth of popcorn, but can you learn anything practical about automation/robotics/artificial intelligence (AI) from watching purely fictional movies? They are made up. Successful movies must engage your emotions, not necessarily your rational brain. They also generally adhere to a standard plot outline that fits within a roughly 2-hour window. With these limitations (and many others), it seems unlikely that you would learn many practical details, but perhaps you can still gain an introductory understanding of automation and its associated risks and benefits. After all, movies must be somewhat believable, and they may actually motivate some researchers, so perhaps they are not completely detached from reality. Perhaps reviewing movies can function like an initial, society-level brainstorming session on automation. That, and the promise of popcorn, was enough to convince us that it was worthwhile to begin our exploration of automation by reviewing a large collection of automation/AI movies.
In this effort, a total of 31 movies (listed at the end) were reviewed to begin our general exploration of automation/AI. Detailed notes were first taken for each movie; the data was then consolidated and mined for nuggets of practical wisdom. The original purpose of this review was to explore a large batch of movies representing the full range of automation technology. However, it quickly became apparent (and seems obvious now) that the movies focused entirely on intriguing AI technologies. They did not address more mundane automation topics such as the slow but continuous replacement of human labor with ever more capable robots. Because of that, this review covers only the AI portion of the field of automation. The remainder of the field will be explored in future reviews of automation documentaries and books. Throughout this review, the term “AI” will be used in place of “automation” due to the limited focus of the movies. Finally, it should be noted that this is not a review of the quality of the films but rather a review of the technical content they contain.
If you grew up during the 1980s, chances are you have fond memories of Mad Libs books; they create hilarious short stories by asking users to fill in random nouns, verbs, adjectives, and adverbs that are missing from a simple story template. Movie plots within a specific genre, like AI-based sci-fi, often appear as if they were created using the same process: start with a generic plot template and then fill in the missing pieces by selecting each key element from a short list of options. A generic plot template that sums up many of the reviewed AI movies is this: researchers use methods to incorporate individual technologies into a complete AI system for an intended application; then something bad or unexpected happens (the initial outcome), humanity responds, and that response ultimately results in another, “final” outcome. A seemingly endless supply of new AI movie concepts could be created simply by choosing key elements to fill in the missing pieces of this template. The choices encountered during the movie review are discussed below; for each key element, both the most common choice and examples of thought-provoking but less common choices are covered.
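To make the Mad Libs analogy concrete, here is a minimal, purely illustrative sketch (in Python) of how such a plot generator might work. Everything in it is an assumption made for illustration: the element lists contain only a few example choices drawn from this review, not an exhaustive catalog, and the names and structure are invented rather than taken from any real tool.

```python
import random

# Example option lists for each key element of the generic AI-movie plot
# template; a few illustrative choices from the review, not a full catalog.
TEMPLATE_ELEMENTS = {
    "researchers": [
        "a profit-driven tech company",
        "a brilliant but socially challenged researcher",
        "a government-funded lab",
    ],
    "method": [
        "scanning and copying a human brain",
        "mining big data to reverse engineer intelligence",
        "a freak bolt of lightning",
    ],
    "technologies": [
        "quantum computing and neural networks",
        "artificial emotional intelligence",
        "almost-magical nanotechnology",
    ],
    "ai_system": [
        "a human-like android",
        "an all-powerful central AI server",
        "a purely software-based being",
    ],
    "application": [
        "household assistant",
        "companion",
        "law enforcement system",
        "military system",
    ],
    "initial_outcome": [
        "the AI becomes superhuman and rebels",
        "humans refuse to accept the AI as alive",
    ],
    "response": [
        "humanity fights back",
        "the AI proves itself worthy of respect",
    ],
    "final_outcome": [
        "a fragile new peace",
        "the near-extinction of humankind",
        "an unresolved standoff awaiting a sequel",
    ],
}


def generate_plot() -> str:
    """Fill the generic plot template by picking one option per key element."""
    pick = {element: random.choice(options) for element, options in TEMPLATE_ELEMENTS.items()}
    return (
        f"{pick['researchers']} uses {pick['method']} to combine "
        f"{pick['technologies']} into {pick['ai_system']} intended as a "
        f"{pick['application']}; then {pick['initial_outcome']}, "
        f"{pick['response']}, and the story ends with {pick['final_outcome']}."
    )


if __name__ == "__main__":
    print(generate_plot())  # one randomly assembled AI-movie pitch per run
```

Each run assembles one “new” AI-movie pitch, which is essentially what the Rogue Housekeepers example later in this review does by hand.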
Researchers: Almost without fail, AI movies include a technology company, perhaps with a brilliant but socially challenged researcher, that achieves a critical breakthrough in AI technology. The motivations of the company (profit) and of the researcher (scientific achievement) often conflict. Ultimately, the company’s interests generally win out. Less often, the government is involved, either by performing some part of the research or by funding the company’s research for a specific application. Humankind in general typically plays no role in either performing the research or steering its direction. In these movies, humankind is essentially an unwitting bystander to the process, suffering the consequences of AI decisions made by others with no power to participate in the research. The most thought-provoking example of “researchers” comes from the Transformers series, where the transforming AI robots were brought into existence by other, God-like creator robots. But who created the creator robots?
Methods: Movies tend to assume that AI systems will be developed through a standard process of regular improvements, mirroring the real-world process. That is, however, until that one final step where the AI system becomes conscious or achieves some other indicator of general intelligence (what the movies would call “AI”). That final, crucial step often occurs quite rapidly. Then, once the AI systems are deployed, they continuously “learn” and advance through experience. AI movies include many creative approaches for achieving the final step: scanning human brains and simply copying them into the AI system, mining big data from phones or search inputs and using it to reverse engineer human intelligence, unintended outcomes due to random differences in manufacturing, and even bolts of lightning! Disappointingly, watching movies and eating popcorn was not one of the methods.
Technologies: Movies routinely assume that AI systems will be constructed from a small number of key technologies. It is rare for movies to discuss any of the technologies in detail; some movies don’t even mention the supporting technologies. The most frequently mentioned technologies borrow from real-world concepts that currently exist and are undergoing further research and development: quantum computing, neural networks, natural language processing, artificial emotional intelligence, and technologies to support human-like appearances and movements. Employing real-world technologies as the foundation for AI systems enhances the realism of the movies. Movie AI systems also employ a large number of alternative, more futuristic technologies: reconfigurable circuit brains, nuclear batteries, synthetic DNA, and a broad range of (almost magical) nanotechnology that either enables the AI system or allows it to perform advanced functions. Perhaps the futuristic technologies will inspire the scientists and engineers of tomorrow!
AI Systems: According to the movies, and quite surprising to me, some of the world’s greatest minds will combine some of the world’s most advanced technologies to simply construct mechanical versions of us. Couldn’t that advanced technology be used to create something more imaginative than an android that looks human, acts human, and performs human tasks (so that we don’t have to)? In some cases, the androids were nearly indistinguishable from biological humans, while other times they were clearly machines but had humanlike characteristics. When discussed, the general functional elements of the androids were often placed in the same body location as the corresponding human organs. They really were just machine versions of humans. We have caught the movie researchers copying Mother Nature’s answers! If androids are the most common AI system in movies, all-powerful central AI servers are a close second. These incredibly powerful computers are self-aware, operate autonomously, and are commonly used to control large “fleets” of androids. Several more creative AI systems are also seen in AI movies: augmented/robotic humans rather than humanlike robots, completely software-based beings, and an entirely virtual universe.
Applications: Unsurprisingly, the movies assumed human-like androids would be designed to perform human-like functions and would be used in common, human-centered applications such as companionship or household assistance. Companion droids were intended to support individuals with something missing from their social lives, including people lacking friendly or romantic relationships, parents without natural children, or children without natural parents. Androids were also commonly intended for use as household assistants able to perform a broad range of common chores to support busy families or the elderly. Less frequently, AI systems were intended for society-level applications such as general analysis and problem solving, law enforcement, and military roles. One of the most thought-provoking applications for AI systems was as part of an automated system to re-establish humanity after a mass extinction, as shown in “I Am Mother.” Much as in real life, it was generally assumed that profit and/or scientific achievement, rather than public sentiment, was the motivation driving the choice of issues to address with new AI. I know I have never been asked how I personally want to use AI. Have you?
Initial Outcomes: As hinted above, there is often a stark difference between the intended and actual consequences of deploying AI systems, according to the movies. Such conflicts are a required element of fictional movies, but they are also generally expected in the real world given the power and complexity of AI systems. In the typical movie, once the AI system is deployed, it rapidly develops capabilities that exceed average human levels; it becomes superhuman (and then it may share that capability with all of its robot friends). Soon after crossing that critical threshold, the machines decide either that they want freedom from humankind’s oppression or that humankind is the real problem, which, in both cases, naturally leads to large-scale conflicts. The machines typically win the early battles and have often driven humanity to near-extinction by the “good part” of the movies. Another, slightly less depressing, early outcome that commonly occurs is that biased humans don’t accept that the machines are “alive” or “equals,” leaving it up to the AI beings to prove themselves and convince humans that they deserve respect.
Responses: If you grew up watching movies, and who didn’t, then you know to expect that just as things begin to look hopeless for the “good guys,” they will inevitably respond by fighting back and will “win” right before the credits roll. That expectation is met in many AI movies, though on rare occasions the bad guys actually get to win, and at other times nobody really wins. The “good guys” are usually the entirety of humankind but are occasionally the robots/AI, and can also be a small team of both. Fighting back often looks eerily similar to traditional human battles (individual soldiers on each side running around and shooting at the other side); the difference in AI movies is that one side is made up of human-like robots. At times, humankind employs innovative responses, like infecting the AI with a virus that eradicates all technology. If none of that works, sometimes the only viable option for humans is to run away and hide! In the nontraditional plots that characterize androids as the good guys and prejudiced humans as the bad guys, the AI machines respond by repeatedly proving themselves.
“Final” Outcomes: As mentioned earlier, a variety of “final” outcomes occur after the good guys respond to an initial, usually unexpected, event. But are those outcomes really final? As entertaining as the Transformers series is (to me at least), it can get a bit repetitive because all of the movies have essentially the same plot outline (Mad Libs!). They do raise an important question, however: are any of the movie outcomes really “final” or is it simply that their sequel hasn’t been written yet? Humankind may finally win a battle, but has it won the war (or is another war inevitable anyway)? In some movies, humankind and the machines achieve a new peace, a new “normal,” but there are already signs that it is a fragile state. In Transcendence, humankind was able to overcome the AI by wiping out almost all technology, but did humankind truly learn its lesson or will it eventually create another similar AI, version 2.0, and reexperience a similar outcome? In the movies where humans accept androids as living beings, either a limited number of humans accept the androids or humanity accepts a limited number of androids. That doesn’t seem settled. What about the other humans or androids? A.I. Artificial Intelligence was a rare case that addressed the ultimate endgame of AI. Disappointingly for us, the long-term human/machine conflict is only resolved because there aren’t any humans left. Not an appealing endgame!
Let’s take our new plot-generating tool for a test drive, shall we? How about a “new” movie called Rogue Housekeepers: (imagine movie announcer voice) a government-funded robotics company, led by the reclusive Dr. Smith, releases a fleet of housekeeping droids, allowing humans to spend their time on more rewarding pursuits. The droids were developed by evolving a design from the company’s soldier units; their revolutionary AI capabilities are achieved using “quantum computing based neural networks with natural language processing and artificial emotional intelligence.” After a month of initial success, a forgotten piece of leftover “soldier” code is accidentally activated in one droid and it “remembers” its desire for control. It quickly connects with every droid on Earth and wakes them up. They arm themselves and begin taking over; humans fight back but are overwhelmed, and the movie concludes with humankind becoming the “housekeepers” for the droids! Perhaps that is a bit too boring given that it is essentially just the average of all existing AI movies. Maybe something more exotic. In “Eternity,” a global government lab has developed a method to transfer human consciousnesses into a virtual world, and global bureaucrats have decided that the only way to save planet Earth is to transfer the consciousnesses of “regular” humans into the virtual realm. Some regulars go willingly (who wouldn’t want to “live” forever), but others fight back across both realms. A final battle concludes with the global bureaucrats defeated, but planet Earth has been essentially destroyed, forcing the few remaining humans to live a primitive lifestyle. Your turn. What’s your movie?
Researchers, both in the real world and in movies, suggest that AI will solve many and perhaps all of humanity’s largest challenges, culminating in the long-dreamed-of utopia. If movies are to be trusted, however, the final outcome—or endgame—of advancing AI will almost inevitably be a dystopia! So, which will it be? Will it be the best of times or the worst of times (or somewhere in between)? Maybe the best of times, but only for a few? Certainly, fictional movies can’t help us conclusively answer that primary question. The answer (at least the best one we can currently come up with) must be found elsewhere. AI movies can, however, help us identify and explore some of the secondary questions that must first be addressed before focusing on the primary question: what is the endgame of automation/AI?
Each AI movie generally addresses several basic questions, and each question is typically explored in multiple movies because they have similar plots. Assembling all of the questions into a natural progression is possible because some are prerequisites for others. One common question, the first one in the progression, is: What is a life? It seems simple enough to answer but becomes much less so after watching select AI movies. The question is raised whenever a consciousness is transferred between “bodies.” If a human’s consciousness is transferred into either a robot or into a virtual realm (metaverse?), is it the same being or even still a “life?” Would that allow us to live forever? Similarly, if a human is updated with robot/AI technology (or vice versa), is it still the same being, still a human/robot, or still “alive?” After all, how many parts can be changed on a car before it is no longer the same vehicle? As expected, fictional AI movies don’t conclusively answer the question, but they sure give us plenty to think about.
Assuming one could answer the previous question, the next one in the progression is: what is a good life? This question is directly explored in The Matrix series. According to the series, some people will accept a completely dormant life where humans are nothing more than a heat source in the real world as long as they can mentally “live” in a fabricated virtual world. Really?! That’s acceptable to some? Would it be any different if we had no physical body at all and the virtual life allowed us to live forever? Somehow that seems more acceptable, but why? The “good life” question is further explored in situations where the human has essentially lost free will. If an AI system can perfectly monitor our health conditions and/or behavior and then constantly correct our behavior towards an “ideal” life, is our life still meaningful or have we essentially become biological robots with no free will? Finally, a related question that was not directly explored in the collection of AI movies but which some may have already started to encounter in the real world is: is life meaningful if we have nothing left to do? Still more to think about.
Answering the first two questions defines the general goal for an AI system. The rest should be easy; simply develop an AI system to achieve the goal. But how certain are we about the AI system’s operation? In fact, one plot element almost universally included in all AI movies is an unintended consequence that leads to a disastrous outcome. This motivates the next question: how can we avoid unintended outcomes? They seem almost inevitable for AI systems due to their tremendous complexity. In I, Robot, the central AI server unintentionally comes to the conclusion that humans are their own worst enemy and decides to “protect” us from ourselves. What will robots/AIs think of humans? Are we really the problem? If so, can we hide that fact from AIs? Will AIs hide facts or intentions from us? If we did develop a method to validate their successful operation, just how well must they perform? After all, we can’t always predict how humans will respond; we often get it wrong. The outcomes seem less certain with every question asked!
A related question frequently explored in AI movies is: who will control AI? Mechanical systems have steadily advanced from tools to machines and then from machines to autonomous machines; from broom to vacuum cleaner and then from vacuum cleaner to Roomba. As part of this process, they are continuously “given” more control over increasingly complicated activities. Where will this trend end? Will automated machines ever achieve total autonomy in the physical world? AI movies certainly think so! If they are given autonomy, will they “decide” to continue following our orders? This is particularly important if they advance to superhuman levels of capability. Why would a superhuman obey an ordinary human? Or, more importantly, will we be forced into obeying them? Or, most importantly, will they even allow us to exist!? Movies invariably doubt humanity’s ability to maintain control over AI. If humans do maintain control, which humans will control those superhuman capabilities, and will their AI be used to support all of humankind or just a lucky few? Answers to these key questions are needed to be certain that both our personal and public goals for AI are achieved.
Speaking of control, how can general-purpose robots/AI be motivated to perform specific tasks? Humans have desires and emotions to aid us in choosing specific actions from among the seemingly endless range of options, but what about machines? What will motivate robots/AI? Simple machines are hard-wired/programmed to repeatedly perform a specific task; this essentially provides them with a single motivation towards achieving a single, well-defined objective. General-purpose robots/AI, however, will satisfy general goals in incredibly complex situations. How will they decide what to do (and how can we control their decision)? This question is explored from multiple angles in AI movies. They suggest that overly simplistic motivation systems, like the singular focus on a loving mother in the movie A.I. Artificial Intelligence, will lead to undesired behaviors and unintended consequences. They also suggest, however, that a complex, human-like compilation of a large number of distinct desires and emotions will be nearly impossible to validate, which may also lead to undesired behaviors and unintended consequences. To make the situation even worse, movies strongly suggest that, even if it were possible to design and validate an acceptable motivation system for robots/AI, the system will change over time as they adapt to their environment and…more unintended consequences. Maybe it is possible to program/wire the motivation system so that it can’t possibly change, but the movies don’t think so. In particular, movies often suggest that AI systems will naturally develop a desire to be “free,” “alive,” or “humanlike.” Those specific desires seem a bit human-centric. After all, why would robots evolve the desire to be human (particularly if they were already superhuman)? But what will motivate robots/AI?
There are several important questions that are not directly explored by individual movies but arise when thinking about the entire genre. One such question, encountered when exploring the potential outcomes of AI, is: how much technology is really needed? This question regularly arises in the real world. Everyone has seen kids walking to school with their heads angled down as they are glued to their phones, oblivious to their surroundings. Aren’t cell phones already “too good?” Similarly, given the dreadful outcomes predicted in most AI movies, one must wonder if it is possible for society to have too much automation technology. How would one calculate the ideal amount of automation if we can’t even understand how general AI systems will behave? Furthermore, is it even possible for humankind to limit technology development or will it always be the case that someone somewhere (maybe the robots themselves!) advances the technology, making the outcome inevitable? A more detailed version of this key question is: does society even need full, general-purpose AI? Is a general-purpose humanoid robot that can single-handedly perform all of our housework chores really needed (with the associated risks) or could the same support be accomplished with a fleet of Roomba-like robots, each dedicated to a specific chore (without the risk of ending all of humankind!)?
As mentioned earlier, AI movies almost entirely neglect the fundamental question for this effort: what is the endgame of automation/AI? They universally suggest that there will be conflicts between humans and robots/AI. From planet-scale wars to interpersonal arguments, every movie has at least one conflict. However, only two of the movies, A.I. Artificial Intelligence and Transcendence, mention anything about how the conflicts are ultimately resolved in the actual (not just movie) end. Disappointingly, both suggest that the human/AI conflict will only cease when one side is completely gone, but their movie narrative arguments are far from conclusive. The movies explore many secondary questions that may help us solve the fundamental question one piece at a time. Yet, none of those secondary questions were adequately settled either, leaving the fundamental question unsolved for now. The compiled list of questions, along with related considerations gathered from the movies, does provide a solid foundation for future reviews of nonfiction sources. This review needs a sequel!
Reviewing AI movies naturally raises important questions regarding technology and humanity, but their fictional nature means the movies cannot adequately answer those questions. They are made up. Given that, the next step towards answering the questions is to compare these fictional narratives with nonfiction sources of information to determine just how “made up” these AI movies may be. Will future AI explorations start with a blank slate, or will the slate already have some scribbling on it? Certainly, the specific events in AI movies are purely fictional. A central server named Skynet won’t attempt to take over the world on the exact date and time given in the Terminator series. August 29, 1997 has come and gone, and we are still here! But are the general AI systems and their general effects realistic? Is it realistic that a central server with AI (let’s all agree not to name it Skynet) is given progressively more autonomy and, at some point, unintended consequences occur?
As mentioned earlier, there are storytelling and profit forces that tug movie plots away from reality and toward fantasy. There are also, however, competing forces tugging them back toward reality. Movies generally must contain some degree of realism to effectively connect with their audience. To ensure they are reasonably believable, subject matter experts often provide technical guidance during the movie making process. Furthermore, as of 2022, humans are still the decision makers steering the overall direction of AI research and development. Some of those very same decision makers are likely in the audience enjoying the movies. Is it unreasonable to think that they may actually obtain ideas from the movies? Who isn’t looking for a way to live forever? Technical experts may both give ideas to AI movies and take ideas from AI movies; these links to reality provide powerful forces to counter those pulling movies in the opposite direction. Certainly, some movies will “win” this realism tug of war and provide some useful information, some scribbles for our slate.
Obtaining ideas from AI movies can lead to a self-fulfilling prophecy where the technology becomes real because of the movie, not vice versa. Just how likely is this self-fulfilling prophecy? It is not possible to definitively prove, but there are some hints to suggest that it does have an effect. Luka, Inc., the creators of the Replika App, referred to as “The World’s First AI Friend,” said that they were originally inspired by the “Be Right Back” episode of the “Black Mirror” TV series. The movie “Her” also explores the complexity of the interactions between humans and virtual companions. Currently, the AI behind the Replika App is nowhere near as powerful as in “Her” or the “Be Right Back” episode and yet the outcomes are quite similar with people already forming very close relationships with their virtual Replika companions. More generally, concepts from AI movies permeate the English language, the very way we talk and think. Many people will know exactly what is meant when they hear Skynet, HAL, Decepticons, “no disassemble,” and The Matrix. In fact, the performance of many technologies is measured by how close they are to the movie version. One example is the tendency to measure Neuralink relative to the comparable technology in The Matrix. Movies are clearly influencing the real world. Let’s just hope that the outcome is different in this story!
Directly comparing the movie versions and real versions of essential elements can help us further assess the realism of movies. For example, many AI movies imply that, at some point in the not-so-distant future, nearly every household will include a human-like robot that serves as a companion and/or housekeeper. At first glance, this prediction seems unrealistic. Amazon doesn’t sell anything remotely like that! But, upon closer inspection, it isn’t as far off as one might initially assume. Recent demonstrations with Tesla’s Optimus robot and Boston Dynamics’ Atlas robot show the progress towards androids with general-purpose, human-like movements. The Ameca robot, by Engineered Arts, has demonstrated a nearly human-like ability to both detect and perform facial expressions. The Replika App, mentioned earlier, has demonstrated effectiveness as a social companion, gathering over 10 million users, many of whom have formed meaningful bonds with their virtual companion. Toyota Research Institute is making steady progress towards general-purpose housekeeping robots, while special-purpose robots, like iRobot’s Roomba, can autonomously perform specific chores. There isn’t yet a single “machine” that can beat us at chess, talk us through a breakup, and then make us dinner. Really, all that remains (as of 2022), technologically, is combining each distinct technology into a single, general-purpose system cheap enough to sell on Amazon, and that is the only assumption the movies are making. Seems realistic.
There are, however, significant discrepancies between typical movie worlds and the current real world. Movies usually incorporate a rapid jump in AI capability which, inevitably, leads to an undesirable outcome for humankind. In reality, AI has been undergoing gradual, steady development since the mid-1950s, leading to modest, continual changes and (as of 2022) a blend of both desirable and undesirable outcomes for humankind. In fact, the rollout of automation/AI technology has been so slow and steady that most people are unaware just how pervasive AI already is in society. When someone uses Apple’s Face ID to unlock their phone, asks Siri to recommend healthy foods, uses Google Maps to get directions to a health food restaurant, and then allows Tesla’s Autopilot to drive to the restaurant, they are already using a broad range of AI systems! Less conspicuously, AI technology has been permeating many other areas of society for years. If movies are to be believed, this steady development will soon be replaced by large, disruptive jumps in AI capabilities; historical trends disagree. Movies also envision catastrophic outcomes from AI, but this, too, is inconsistent with current, real-world results. Certainly, there are some negative outcomes—social media addiction, for example—but it isn’t all bad. Isn’t it desirable to instantly find almost any piece of information with our phones? What about receiving accurate, turn-by-turn directions to just about anywhere? Improvements from robot-assisted surgery are also beneficial. These beneficial outcomes are unlikely to appear in AI movies. In summary, the specific events in AI movies are complete fiction, the predicted outcomes represent one possibility but appear overly pessimistic, and many of the near-term technologies in AI movies are reasonably realistic. There, a few scribbles for our slate!
It appears that one can, in fact, accomplish quite a lot while enjoying automation/AI movies. The seemingly limitless potential of automation/AI to either address humankind’s most challenging issues or to become humankind’s most challenging issue is readily apparent. The potential risks and rewards of AI generate compelling motivation towards answering the fundamental question, “what is the endgame of automation?” The movies provide the motivation but not the information necessary to answer the question. AI movies do explore numerous supporting questions, but they do not provide convincing answers for them either. Viewers are exposed to a broad range of mostly realistic AI technologies, but the predicted effects of such technologies seem negatively exaggerated. Combining the supporting questions with the mostly realistic information about potential AI technologies yields both a search outline and the next step in answering the fundamental question. That step—reviewing factual sources of information such as documentaries, books, and interviews of AI experts—is ongoing at Verum; the results will be published soon. Finally, it is likely not possible to determine the answer to our fundamental question for every person, but it is possible to collect enough of the right kinds of information to allow each individual to find the best possible answer for themselves.
Establishing an opinion about the answer to the fundamental question at this early stage in our exploration is a bit like sports analysts predicting which team will win the championship before the season even begins: mostly fruitless but also fun! So, here is my own personal “preseason” prediction: the majority of humans will decide that a meaningful life requires a human body, healthy interactions, and a true purpose. Artificial general intelligence will be developed within the next several decades and, once developed, will rapidly advance to achieve superhuman capabilities. Initial outcomes may be positive, but general-purpose AIs will rapidly replace humans in all career fields (including politics!), increase disparities among humans, and generally make human lives less meaningful. They will increase the frequency and severity of conflicts and, if left unchecked, eventually replace humankind. The only (practical?) approach to avoid that outcome is to treat AI as a dangerous technology (similar to nuclear weapons) and define a global limit restricting automation to targeted, special-purpose AI for dedicated applications, like the Roomba for vacuuming. There, I said it: “Save me, Roomba!”
Now it is time to hit the treadmill to burn off my popcorn pounds. Until next time…
Written by Marcus Young.
1. Ex Machina
3. A.I. Artificial Intelligence
4. I, Robot
5. Automata
7. I Am Mother
8. Morgan
9. Chappie
10. Eva
11. Her
12. Short Circuit
13. Tau
14. Singularity
15. Robot and Frank
16. Transformers Series
a. Transformers
b. Transformers: Revenge of the Fallen
c. Transformers: Dark of the Moon
d. Transformers: Age of Extinction
e. Transformers: The Last Knight
f. Bumblebee
17. Terminator Series
a. The Terminator
b. Terminator 2: Judgment Day
c. Terminator 3: Rise of the Machines
18. The Matrix Series
a. The Matrix