
Humans and AI: Should We Describe AI as Autonomous?

March 10, 2021 · by Colin Priest · 6 min read

Beware the hype about AI systems. Although AI is powerful and generates trillions of dollars of economic value across the world, what you see in science fiction movies remains pure fiction. In this blog post, I will focus on the use of the word autonomous, the dangers of using it with stakeholders, and, in the context of customer experience, the inaccurate perception that all things can be automated, eliminating the need for interactions between employees and customers.

According to the dictionary, autonomous means “having the freedom to govern itself or control its own affairs.” To have autonomy is to have the freedom to exercise self-determination, to rule oneself, to make decisions in accordance with one’s own goals, without external interference.

Contrast the dictionary definition with how the word is used. When I google autonomous, autofill suggests search phrases such as autonomous vehicles, autonomous driving, autonomous weapons, autonomous chair, and autonomous delivery. The surprise in this list is autonomous chair. It turns out there is a business called Autonomous that sells office furniture on its website, autonomous.ai. I quickly tagged that match as a false positive.


The Department of Defense Directive (DODD) 3000.09 defines lethal autonomous weapon systems as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” In this usage, human-out-of-the-loop AI decision-making is labeled as autonomous. The same goes for the term autonomous vehicle for driverless cars: both usages are broader than the dictionary definition.

Is AI Autonomous?

Is autonomy a realistic promise or is it simply marketing hype?

The current generation of AI systems is powered by machine learning, a technology that learns by example rather than relying on humans to manually code rules into a computer system. The process of creating and using the machine learning algorithm that powers your AI’s decisions requires a data scientist to complete the following steps (a minimal code sketch follows the list):

  1. Prepare relevant learning examples.
  2. Set the goal to be achieved or optimized.
  3. Fit pattern-matching algorithms.
  4. Review the fitted patterns and outputs, iterating to ensure desired behavior.
  5. Deploy the machine learning model into production.
  6. Use MLOps tools and practices to define and monitor key performance indicators and manage system health.
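
A minimal sketch of steps one through five, assuming a scikit-learn workflow; the file name, column names, and fraud-detection framing are hypothetical illustrations, not a specific product’s API. Note how every choice in it belongs to a human.

```python
# Illustrative sketch of steps 1-5; the dataset and column names are hypothetical.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Step 1: prepare relevant learning examples.
data = pd.read_csv("historical_transactions.csv")
X = data.drop(columns=["is_fraud"])
y = data["is_fraud"]  # Step 2: the goal to optimize, chosen by a human.

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Step 3: fit a pattern-matching algorithm.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Step 4: review the fitted patterns and outputs before trusting them.
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Step 5: deploy, e.g., by serializing the model for a production service.
joblib.dump(model, "fraud_model.joblib")
```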

To get from machine learning to an AI system, you need to broaden step four to convert machine learning predictions into decisions (for example, to offer product X to customer Y or to flag a transaction as fraudulent).
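
As a hedged illustration of that broadened step four, continuing the sketch above: a human-chosen threshold, not the model, turns a predicted probability into a business decision. The threshold value and function name are placeholders.

```python
# Illustrative only: a human-owned policy converts a prediction into a decision.
FRAUD_THRESHOLD = 0.85  # set and owned by a human, not by the model

def decide(transaction_features) -> str:
    """Convert the model's fraud probability into a business decision."""
    probability = model.predict_proba([transaction_features])[0, 1]
    if probability >= FRAUD_THRESHOLD:
        return "flag for manual review"  # a human stays in the loop
    return "approve"
```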

At no point in this process does the AI system get to choose its own goals or make decisions without human governance. The AI system is just a computer system, a tool to be used by humans. It is designed by humans, built by humans, and managed by humans, with the objective of serving human goals. You don’t have to negotiate with an AI system to get it to do its job. AI systems are autonomous only to the extent that they can make individual decisions without direct human interference in the moment, and, critically, never without the human capability to intercede, to interrupt, or to modify them.

The dictionary definition of automated is “operated by largely automatic equipment.”

If we ignore the hype, your AI system is automated, not autonomous.

The Danger of Describing Your AI as Autonomous

Recently published research papers show the danger of describing your AI systems as autonomous.

In Attributing Blame to Robots: I. The Influence of Robot Autonomy, the authors report the results of an experiment that examines how humans attribute blame to humans, non-autonomous robots, autonomous robots, or environmental factors in scenarios in which errors occur. In the experiment, the study participants were presented with a set of scenarios in which a task failed. Different scenarios were written to emphasize the role of the human, the automated system, or environmental factors in producing the task failure. In some but not all scenarios, the automated system was described as autonomous.

The results of the study showed that humans attribute the most blame to humans, followed by autonomous systems. They attribute almost equally low levels of blame to non-autonomous systems and environmental factors. The authors concluded “that humans use a hierarchy of blame in which robots are seen as partial social actors, with the degree to which people view them as social actors depending on the degree of autonomy.” Because human acceptance of AI systems depends on how blame is attributed when errors occur, describing an AI system as autonomous invites more blame and lowers acceptance of that system. In other words, your employees and customers will be more satisfied if you don’t describe your AI as autonomous.

Autonomous AI and Accountability

In another research paper, Automation and Accountability in Decision Support System Interface Design, the author concludes:

when developing human computer interfaces for decision support systems that have the ability to harm people, the possibility exists that a moral buffer, a form of psychological distancing, is created which allows people to ethically distance themselves from their actions.

It seems that the way humans react to the word autonomous decouples accountability from human operators and anthropomorphizes the system. Humans start to treat the AI system as if it were a human too, morally and legally accountable for the decisions it makes.

There are legal precedents that hold humans responsible for the acts of AI systems. In one example of anthropomorphization, the AI system Dabus was listed as the inventor in a US patent application. Dabus designed interlocking food containers that are easy for robots to grasp and a warning light that flashes in a hard-to-ignore rhythm. The creator of Dabus argued that because he had not helped it with these inventions, it would be inaccurate to identify himself as the inventor. However, courts ruled that only natural persons can be inventors. Other legal precedents ascribe liability for automated systems to the user, the vendor, the programmer, or the human expert who provided the advice upon which the system was designed. The notion of autonomy cannot be used as a get-out-of-jail-free card for human accountability.

In general, AI systems are not autonomous. Even in the rare cases in which they do make individual decisions with autonomy, human designers, operators, and system administrators are still accountable. Talk about autonomy distracts us from that fact.

The Brand Value of Humans

Another research paper, The Labor Illusion: How Operational Transparency Increases Perceived Value, reports the results of an experiment that examines whether customers value a product or service differently depending on the level of human effort they perceive went into it. The experimenters simulated experiences in online travel and online dating, varying the time people waited for a search result. They also varied whether the participants were shown the hidden work that the website was doing while they waited for results. The results showed that, no matter how long people waited for a result, they considered the website more valuable when it revealed the effort it was exerting. The research participants also reported more willingness to pay for the services, a perception of higher quality, and a greater likelihood to use the site again. This effect is referred to as operational transparency.

Related research, such as Social Traces of Generic Humans Increase the Value of Everyday Objects, shows the effects of the visibility of human effort and human involvement on the perceived value of a product or service. In their experiments, the researchers demonstrated that the perceived value of goods increased when they carried a label to indicate the goods were made “by people using machines” (rather than just made by machines). The researchers concluded that the “results suggest that generic humans are perceived positively, possessing warm social qualities, and these can ‘rub off’ and adhere to everyday objects increasing their value.” Other related research has shown that people prefer the taste of food when they see another person make it. They value an experience in a massage chair more when another human is operating the equipment. Your customers will appreciate your AI-powered products and services more when they see that your human employees designed and managed the AI system.

Humans and AI Best Practices

Given the results of the research, the best practice for optimizing your customers’ AI-powered digital experience is to clearly communicate that your AI systems are designed, operated, and managed by your employees. Doing so improves the customer experience, customers’ acceptance of AI technology, and the perceived value of your products and services.

Other recommended best practices include:

  • Clearly defining your ethical values and practices for AI usage.
  • Using model validation workflows that explain how the AI behaves so that business subject-matter experts can understand and validate it.
  • Using MLOps tools that enable your employees to manage AI systems to ensure that they remain healthy and continue to achieve your business goals (a minimal monitoring sketch follows this list).
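
As a hedged sketch of that last practice: a scheduled health check compares a deployed model’s recent performance against a human-defined key performance indicator and escalates to a human operator when it degrades. The threshold and the alerting hook are placeholders, not a specific MLOps product’s API.

```python
# Illustrative health check for a deployed model; the KPI threshold and the
# alerting hook are placeholders, not a specific MLOps product's API.
from sklearn.metrics import roc_auc_score

KPI_MIN_AUC = 0.80  # key performance indicator defined by humans

def check_model_health(model, recent_features, recent_labels) -> None:
    """Compare live performance against the KPI and escalate to a human."""
    score = roc_auc_score(
        recent_labels, model.predict_proba(recent_features)[:, 1]
    )
    if score < KPI_MIN_AUC:
        # A human operator decides what happens next: retrain, roll back,
        # or pause the system. The model never governs itself.
        notify_operations_team(f"Model AUC dropped to {score:.3f}")

def notify_operations_team(message: str) -> None:
    print("ALERT:", message)  # stand-in for a real paging or alerting channel
```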
About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
