Dec 4, 2018

Self-learning idiots

Our TOI colleague Michael Leitl explains why we need not fear AI itself, but rather its users. His post concludes our blog series "Fear vs. Optimism".

Michael Leitl

Today, computers often amaze us with seemingly magical feats. A retailer calculates its stock demand with an AI. This works so well that goods ordered for stock are often already sold while still on their way to Otto. The operator of a New York-based Harley-Davidson store used an AI to find new customer contacts and received over 1,000 percent more prospects. Google enhances poorly exposed images with AI. AI supports doctors in scanning tomography and X-ray images for malignant tumours – with impressive success.

As fascinating as these success stories are, they also create fear. People fear for their livelihoods, for society, or for the survival of humanity. Depending on the study, varying numbers of jobs will fall victim to automation by AI: in the UK, for example, 7 million by 2037, according to a study by PricewaterhouseCoopers. On the other hand, around 7.2 million new jobs are expected to be created in the same period, in the health, research and education sectors.

So, what should we really be afraid of? The machines themselves? The change in our own biographies? Or perhaps those who use or misuse the new technology for their own purposes?

Facebook relies on algorithms to increase how long readers stay in its news stream. As a result, populist news is increasingly washed to the top of the stream. Messages that polarize are shared and attract attention – so users stay on the platform. Various groups have exploited this mechanism: in Myanmar, according to a ZDF documentary, human rights organizations see the growing incitement on Facebook – fuelled by the algorithm – as a cause of the bloody conflict there. (Source: mit-der-privatsphaere-102.html)

China relies heavily on collecting, combining and cross-referencing data. The country's leadership has recognized that the potential of AI to manage complex processes can eliminate the inefficiencies of authoritarian state systems. What makes AI a blessing for production processes might become a curse for the surveilled citizens of dictatorships. (Source: Institute of Innovation and Technology, "Good AI, bad AI?")

It depends on what you make of it

AI itself is neutral. It has no value system, but works like a megaphone. This is vividly illustrated by the experience with Tay, Microsoft's self-learning chatbot, which turned racist. The same goes for Google's image recognition, which proved to be discriminatory, or Amazon's AI-based recruiting tool, which systematically downgraded the applications of women.

Learning systems reinforce existing patterns in the datasets they learn from and turn them into working rules. Guided by these rules, they systematically apply the now-programmed prejudices. The examples show that learning machines are a powerful tool. They can make life more interesting, expand our possibilities and simplify everyday life. In the wrong (read: biased) hands, however, they also have the potential to destroy our society.
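The mechanism can be sketched in a few lines of toy Python. All data and names here are hypothetical; the "model" is deliberately naive, but it shows how a system that merely learns from biased historical decisions will faithfully reproduce the prejudice baked into them:

```python
# Hypothetical toy data: past hiring decisions biased against group "B".
# Each record: (group, qualified, hired)
history = [
    ("A", True,  True), ("A", True,  True), ("A", False, True),
    ("B", True,  False), ("B", True,  False), ("B", False, False),
]

def learned_hire_rate(group):
    """A naive 'learning' rule: estimate the hire probability for a
    group purely from the historical labels."""
    hired_flags = [hired for (g, qualified, hired) in history if g == group]
    return sum(hired_flags) / len(hired_flags)

# The model reproduces the historical prejudice: equally qualified
# candidates from group B are never recommended for hiring.
print(learned_hire_rate("A"))  # 1.0
print(learned_hire_rate("B"))  # 0.0
```

Nothing in the code "decides" to discriminate; the bias enters entirely through the training data, which is exactly the pattern behind the Tay, image-recognition and recruiting examples above.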

Media literacy as a duty

We must therefore learn to be sceptical of the results of these giant calculations. The more complex the systems, the more likely we are to face unexpected and unwanted side effects. To be able to recognize them and take countermeasures, in-depth media literacy is as important as learning basic arithmetic.

So far, AI is no more than an island of narrow ability – an artificial, hard-working idiot. We will have to wait a long time for real creativity in the sense of inventions driven by passion, fear or maniacal genius. What we get is excellent pattern recognition based on existing data. The results are copies or extrapolations of the past, not futuristic originals. The art critic Hanno Rauterberg comes to the same conclusion in his comment on the auction of an AI painting at Christie's: "It is the thousandth adaptation of certain portrait patterns ... The picture is much more boring than its history." He rates it as "relatively irrelevant".

Read all posts from this series: FEAR VS. OPTIMISM

It all started with Andrea's blog post, in which she asked: "When will the general public finally get the chance to become part of the digital optimism?" The first pragmatic answer came from Stefanie, followed by Minjoo with a creative coder's point of view and Tommaso's contribution. In between, Karel had his say, as did Heiko, who stressed the responsibility of design and believes that proper "education" might be key.

Header pic by Raj Eiamworakul on Unsplash

Michael Leitl

AI Evangelist

Michael studied chemistry in Reutlingen, was a long-time editor at "Harvard Business Manager", co-founded the news platform Pocketstory and was a member of the innovation team at "Der Spiegel". He brings a wealth of knowledge and a journalistic point of view to the team of Tools of Innovators and INDEED Innovation.

