Artificial Intelligence

How a black box is changing organizations

  • Thomas Schnelle
  • Tuesday, December 6, 2022

AI is a driver of digital transformation. But can algorithms really take decisions away from us – or do they only control our decision-making behavior?

The upheavals triggered by digitization and artificial intelligence (AI) are enormous. We maintain that they do not arise from computational intelligence acquiring decision-making power beyond what humans have programmed into it, but from AI providing new resources of power and influence that are permanently changing decision-making processes.

A chatbot writing poetry

At the Shanghai headquarters of JD.com, China's largest e-commerce company, the machines never stand still. Some 200,000 orders are processed here every day, yet across 40,000 m² of floor space you have to search hard for the four female employees who still work there. That's because from ordering to shipping, robots have taken over the business at JD.com: they take down orders, pick goods from the shelves, and lift them onto conveyor belts. The personal touch is provided by a chatbot that can supposedly write poetry; all it takes is a few personal details about the customer.

Sounds like science fiction? It’s already reality! AI systems are now used in almost all industries from agriculture to administration. Around the globe, entrepreneurs are asking themselves what other tasks AI could take over in their organizations in the future.

Skeptics and proponents alike agree that AI is the most powerful technology available to us today. For one thing, algorithms are capable of processing incredible amounts of data; for another, they can become ever more sophisticated themselves by constantly gathering empirical data, analyzing examples, and refining their evaluation patterns accordingly. Organizations now collect so much data that they can easily feed self-learning AI applications with sufficient training material. Nor is there any shortage of the computing power needed to process the huge amounts of data.
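To make this refinement loop concrete, here is a minimal sketch in Python. All names and numbers are invented for illustration; it simply shows an online learner nudging its evaluation pattern with every labeled example it encounters.

```python
# Minimal sketch of a self-refining evaluation pattern: an online
# perceptron-style learner that adjusts its weights with every labeled
# example. All data here is hypothetical, purely for illustration.

def train_online(examples, n_features, lr=0.1):
    """Refine a linear evaluation pattern one example at a time."""
    weights = [0.0] * n_features
    bias = 0.0
    for features, label in examples:  # label is +1 or -1
        score = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if score >= 0 else -1
        if prediction != label:  # misclassified: adjust the pattern
            weights = [w + lr * label * x for w, x in zip(weights, features)]
            bias += lr * label
    return weights, bias

# Each new batch of empirical data nudges the same weights further.
stream = [([1.0, 0.2], 1), ([0.1, 0.9], -1), ([0.8, 0.3], 1)]
weights, bias = train_online(stream, n_features=2)
print(weights, bias)
```

Every new batch of data moves the same weights a little further: exactly the incremental self-refinement described above.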

How the need to pre-select shackles automated decision-making

But for all the love of algorithms, there is a blind spot in automated decision-making processes: even the smartest AI cannot make truly autonomous decisions on our behalf. As we understand them, a defining characteristic of decisions is their contingency, i.e. the ability to choose among multiple options. A core element of decisions in organizations is thus to weigh factors against each other and, if necessary, to freely add new factors that were not foreseen in the decision-making process. For example, an insurance company employee might spontaneously make the settlement of a claim dependent on how long the insured has been a paying customer – a circumstance that is completely irrelevant to the assessment of the case itself, but can be quite significant for the company.

Artificial intelligence, however, cannot make autonomous decisions; it merely follows the more or less broad paths that programmers have laid out for it. Even the most prudent software designer can never anticipate, account for, and program the multitude of environmental factors that affect organizations – factors that might well call for deviations from the rules. In our view, this decisively shackles artificial intelligence.

Since algorithms are programmed by humans, they will also have the same blind spots and knowledge gaps.
Thomas Schnelle


The source code is also biased

What an AI system learns naturally depends on what it is fed – or rather, what data it automatically searches for. The selection of training material for a neural network, however, is just as subjective as a person's choice of a particular pair of glasses: both determine that things are perceived in a certain way.

Unfortunately, it is thus not very likely that AI-based decisions will become more rational and objective. But it is all the more likely that AI will massively influence decision-making behavior in organizations in the future, and thus redistribute power and influence.

Since algorithms are programmed by humans, they will also have the same blind spots and knowledge gaps. In this way, programs can emerge that reproduce racist or discriminatory decisions, without the developers consciously cultivating a racist attitude – all it takes is a gap in their awareness of the problem. This is how touchpads came to be programmed that worked only for people with light skin, because the underlying programs rely on light-dark distinctions. The programmers and testers did not notice the problem at first; the product only came into contact with people of color after its release.

In this case, it was not a self-learning algorithm, but the problems there are just the same: humans write the code and thus decide what is to be observed and learned, and what remains hidden. As AI is granted more and more decision-making power, the risk of reproducing social problems grows. “Machine bias” is the name given to this vulnerability of AI systems.
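The touchpad example can be boiled down to a few lines. The following sketch is purely illustrative – no real product code, and all numbers are invented – and shows how a blind spot in the calibration data, not any conscious attitude, produces the discriminatory behavior:

```python
# Hypothetical sketch of the touchpad problem: a detection threshold is
# calibrated only on the reflectance values the developers happened to
# test with. Nothing here is real product code; the numbers are invented.

def calibrate_threshold(calibration_samples):
    """Set the detection threshold from whatever samples were provided."""
    # Trigger when reflectance exceeds 80% of the lowest calibrated value.
    return 0.8 * min(calibration_samples)

def detects_touch(reflectance, threshold):
    return reflectance >= threshold

# The calibration set contains only high-reflectance (light-skin) readings:
threshold = calibrate_threshold([0.72, 0.78, 0.81])

print(detects_touch(0.75, threshold))  # True  -- works for the tested group
print(detects_touch(0.35, threshold))  # False -- fails for darker skin tones
```

The bias enters entirely through the selection of calibration samples; the code itself contains no reference to skin color at all.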

The AI black box and its decision-making (de)tours

An AI brain that sifts through and weights vast amounts of data is a black box whose decision-making processes are only comprehensible with great effort. We basically need another program that observes the AI decisions and displays them in a way that human users can understand.
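One common way to build such an observing program is occlusion probing: treat the model as a black box, hide one part of the input at a time, and record how much the output changes. The sketch below assumes a hypothetical model object with a predict method returning a single confidence score:

```python
# Minimal sketch of a program that observes a black-box model, assuming
# a hypothetical `model` object whose predict(image) method returns a
# single confidence score. Nothing here is a real library API.
import copy

def occlusion_importance(model, image, patch=8, baseline=0.0):
    """Score each image patch by how much hiding it changes the output."""
    reference = model.predict(image)          # unmodified prediction
    scores = {}
    height, width = len(image), len(image[0])
    for top in range(0, height, patch):
        for left in range(0, width, patch):
            masked = copy.deepcopy(image)
            for row in range(top, min(top + patch, height)):
                for col in range(left, min(left + patch, width)):
                    masked[row][col] = baseline   # hide this region
            # A large drop means the model relied on this region.
            scores[(top, left)] = reference - model.predict(masked)
    return scores
```

The regions whose masking causes the biggest drop in the score are the ones the model actually relied on – whether or not they are the ones we would expect.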

Some big surprises come to light in the process. A group of researchers at the Heinrich Hertz Institute (HHI) in Berlin, for example, analyzed how neural networks used for image recognition work and discovered some astonishing things. One software program that was supposed to recognize photos of horses did not rely on the content of the images, but on copyright information that pointed to horse forums. To humans it is obvious that there can be horse photos without copyright information, or horse forum photos without horses – rare, but possible – and that it is better to check whether the shape of a hoof can be recognized somewhere in the photo. But that does not mean it is obvious to the software. Blurry photos, partially obscured key features, and a lack of contrast with the surroundings only increase the software's error rate. So it orients itself to values that can be reliably recognized: characters.
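This shortcut can be reproduced in a toy experiment. In the hypothetical sketch below, a watermark flag perfectly predicts the label in the training data (as the forum tags did), so a simple rule learner picks it over the genuine visual cue – and collapses to chance accuracy once the watermark disappears at deployment:

```python
# Toy reconstruction of the horse/watermark shortcut. All data is
# synthetic and invented for illustration: a "watermark" feature
# perfectly predicts the label in training, but vanishes at deployment.
import random

def make_data(n, watermarked):
    data = []
    for _ in range(n):
        is_horse = random.random() < 0.5
        # A genuine but noisy visual cue (hoof shape, outline, ...):
        shape_cue = (0.7 if is_horse else 0.3) + random.uniform(-0.25, 0.25)
        # Horse-forum photos carry a copyright tag; real-world photos don't:
        watermark = 1.0 if (is_horse and watermarked) else 0.0
        data.append(((shape_cue, watermark), is_horse))
    return data

def learn_rule(train):
    # Pick the single feature whose threshold rule errs least in training.
    best_feature, best_errors = None, None
    for f in (0, 1):
        errors = sum((x[f] > 0.5) != y for x, y in train)
        if best_errors is None or errors < best_errors:
            best_feature, best_errors = f, errors
    return best_feature

train = make_data(1000, watermarked=True)    # training set with forum tags
test = make_data(1000, watermarked=False)    # deployment: no tags

f = learn_rule(train)  # picks the watermark (feature 1): zero training errors
accuracy = sum((x[f] > 0.5) == y for x, y in test) / len(test)
print(f"chosen feature: {f}, deployment accuracy: {accuracy:.2f}")  # ~0.50
```

The learner is not wrong by its own criterion; it simply found the cheapest reliable signal in the data it was shown.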

What is true for image recognition is also true for machine learning in general. A program seeks the path of least resistance with the highest possible success rate. But what does this mean for AI-based decisions and their legitimacy and acceptance in organizations?

The AI decider has power

AI does not automatically lead to a higher degree of transparency or rational logic in organizational decision-making behavior, but primarily ensures a redistribution of power in decision-making processes. Precisely because AI is a black box for most people, it gives actors influencing resources that may not even have been in play before. This is because all those who program, train and analyze the algorithms (or have them programmed) – for example, employees who are responsible for the new technologies – are gaining power.

Anyone who wants to use data-based decisions as arguments in their favor benefits from the fact that AI's decision-making paths are not at all clear to most people. And anyone who can credibly convey that they have penetrated the AI black box is obviously at an advantage. That is why an argumentative reference to AI in decision-making processes can overcome opposition and save further debate: after all, a computer makes decisions on the basis of data, and numbers are pretty unbeatable as a means of legitimation.

You could also say that anyone who has access to artificial intelligence will tend to have more power in organizations in the future, regardless of which algorithm is steering decisions.

At any rate, in JD.com's central warehouse in Shanghai, machines seem to have already taken over. I wonder how it feels to walk the halls as one of only four humans. As founder Richard Liu says: “I hope that one day we will be entirely run by robots and AI.” For now, though, his system reaches its limits at the town-limit sign, because from there the autonomous truck may no longer drive on its own: in cities, only humans are allowed at the wheel.

Author

Dr. Thomas Schnelle

focuses in particular on the impact of digitalization on pharmaceutical organizations.