Few gaps are larger than the one between expert knowledge and broad public opinion on AI. While some enthusiastically immerse themselves in ever-new possibilities, the rest paint horror scenarios. When will the general public finally get the chance to share in the digital optimism?

Andrea’s thoughts:

It’s Thursday, 9 o’clock. I am sitting in the lecture hall of the culture centre ‘Kampnagel’, visiting the digitization fair ‘’, and listening to a very rousing opening speech by the Senator for Culture and Media of the City of Hamburg. He talks about AI and automation and focuses on the topic of humanity. He mentions the fear many people feel in the face of advancing digitization and appeals imploringly to us to counter this fear with optimism and with passion for new challenges and change. He places emotions at the centre of attention, emphasizes the cultural dimension (interesting that in Hamburg the topic of digitization is attached to the department of the Senator for Culture and Media) and speaks about courage. All of this in front of a specialist audience of a few hundred IT experts and industry partners.

Three days later. It’s Sunday, 8.15 pm. I am watching Germany’s most successful weekly crime series, ‘Tatort’. And I notice that this week’s killer is a robot. I see dark visions and an annoyed, almost aggressive and choleric inspector prophesying doom about machines and the future of humanity. A robot kills. And on TV they clearly state that the legal means of prosecution still lag behind the new technologies. This message reaches a quarter of all TV viewers, more precisely 8.1 million people. Not experts, but ‘John Doe’.

How will the courage and openness needed for a change the majority cannot yet gauge reach the public? What can experts do to move AI away from its science-fiction image and towards a realistic view? How can they help people recognize Artificial Intelligence as a new industrial revolution rather than a dark scenario hovering above us all?

In the case of the ‘Tatort’, the German tabloid ‘Bild’ did intervene: the very next day it published an interview with Philipp Thesen, professor of human-system interaction in industrial design at Darmstadt University, to put the murderous robot into perspective. Unfortunately, the interview was cut so drastically that the article rather stirred up fear even more…

But either way: one short article in ‘Bild’ is not enough anyway. It is not just manslaughter and the loss of their jobs that worry people; it is the great unknown that threatens them. We tend to be afraid of what we do not know. That means education is urgently needed! But how do we provide it?

If politics focuses on the professional audience and the media feast on horror, I think it is those who are already far ahead of the masses who are needed: those who have worked with and for AI daily for years, those who immersed themselves in the topic long ago, those who develop, advise and regularly meet others who know even more, those who speak on the subject themselves and are part of what is going on.

My own field is public relations; there are others at INDEED who are far more competent on the topic of AI. Hence I need your support, dear colleagues! You are the designers, engineers, innovation managers, business strategists and creative interaction technologists: what can we do to get people on board? How can we take away fear, build confidence instead, and thereby make digital innovation more ‘human’? I am looking forward to your input and pioneering answers!

Steffi’s thoughts:

First of all, we should note that this fear is completely normal. Looking back at the introduction of earlier new technologies, it feels familiar: in the beginning, film screenings were a curiosity, and the first photographs were suspected of capturing the human soul. (Editor’s note: this belief is still widespread in cultures without our level of technology. A tourist’s camera in the wrong place is no less frightening than unrealistic ideas about AI are to the uninformed population of industrialized countries.)

In the early days of electrification, the first format wars began: Edison and Westinghouse fought desperate battles over the better power system, with sometimes absurd episodes and demonstrations staged to convince humanity. Today we face the same mechanisms of fear and fascination when dealing with robots like Sophia or voice assistants like Alexa and Siri.

Somewhere between the Muppet Show and Terminator panic, we are becoming familiar with parts of Artificial Intelligence. What I find exciting is that we already trust Google’s image recognition, our brake assistants or our credit card’s fraud detection without reacting with doomsday panic. (Editor’s note: perhaps because we completely repress the thought that this already is Artificial Intelligence…)

I believe the majority of our population already has the same magical relationship to algorithms, machine learning or robotics as it has to power from the socket. Almost no one knows exactly how it gets there or what it requires. But we have all learned to use it and take advantage of it instead of continuing to live by candlelight. Of course, this is primarily pragmatism, but pragmatism often pairs with optimism. Usually fear only arises when we see no benefit for ourselves or are afraid of being cheated. Then we feel like the weavers after the development of the steam engine, like the chandlers after the arrival of electric light, or like horses after the invention of the car.

In my opinion, we will only generate more optimism when we demonstrate the practical benefits of AI and proactively point out its weak spots. Instead of scaremongering with horror scenarios like the ‘Tatort’ Andrea mentioned, we must educate. For the topics of Artificial Intelligence, we have to find equivalents to explanations like “Do not poke a fork into the toaster, or you will get an electric shock.” And that education will only work through discussions and conversations.

Through conversations between professionals and their families, between interested people and bystanders, between institutions and citizens. In concrete terms, this can be, for example, a lecture at a college, like the ones my colleagues and I sometimes hold. Heiko, for instance, recently talked about ‘AC/DC’ during a dialogue at Münster University of Applied Sciences and answered the students’ questions afterwards.

Even our fireside chats (recently given the title ‘A Toast to the Art of Artificial Intelligence’) reach people who are not that deep into the topic and who then gain a new or broader perspective. Such as Sven-Olaf of crowdmedia.

Works like the following are also perfect for further education, as they are accessible to even more people and a great opportunity to turn challenging content into comprehensible content: visualization is very important for taking away fear and preventing wrong ideas from taking hold.

All this is not that different from any change management process: once the problem is known, you can work on understanding and solutions. And then you can take the necessary measures. Maybe it will not all run smoothly,
but ¯\_(ツ)_/¯

Minjoo’s thoughts:

In summary: ‘Overcome fear through knowledge’:

“The philosophical debate in 1991 arguing that the Gulf War did not take place was a milestone in modern intellectual history. Jean Baudrillard claimed that the Gulf War on CNN and other media was merely ‘simulacra’, a play of symbols that had nothing to do with the actual war in Iraq. In other words, the image created by assembling such symbols moves gradually further from reality, and this image constitutes a world of ‘hyperreality’ that replaces reality.

Thus, we cannot really tell what happened in Iraq; for us it is as if it had not happened. Baudrillard’s claim was that the media’s power of editing and interpretation prescribes our reality. Baudrillard even took up McLuhan’s claim that the medium is ‘the message’ and ‘the massage’ at the same time.

As Andrea mentioned earlier, the impact of today’s media is so powerful that it sometimes overshadows reality and easily misleads the general public into constructing a ‘wrong (or at least not falsifiable) opinion’.

However, an experiment conducted by students from the Oxford Internet Institute shows an interesting result. In December 2017, they asked people to watch various science-fiction films about AI and afterwards asked about their opinion of the internet, their experience with AI, and their willingness to automate specific tasks in their everyday lives.

Surprisingly, no matter what kind of world the movies depicted (utopian or dystopian), watching them only polarised participants’ attitudes. Optimists became more eager to integrate AI into their everyday lives, and skeptics became even more guarded towards AI technology. In other words, people take in information about AI in a biased manner, selecting what supports their predefined attitudes. This suggests we may see even more polarisation of opinions as people are exposed to more media sources.

The main reason for the fear of AI results, as far as I can see, from the complexity of AI (or machine learning) technology. Amid the flood of media announcing the advent of the singularity and AI taking over human jobs, it is easy to build up fear, as it is human nature to distrust what we do not understand.

So what could be viable measures to give people a balanced view of AI?

AI-created stupidity reduces public fear

Most current media reports on AI focus heavily on sharing technical progress within the field, and few are brave enough to show the mistakes machines make. If you have worked in technology, you know how demonstrations go: the media takes the ‘best shots’ from numerous demonstration trials (even though most of them may have failed to function properly).

A few months ago, I came across a video collecting all the funny mistakes robots make. It shows the scenes behind the perfect demonstrations that receive the media spotlight:

The robots were constantly making mistakes, and that, strangely, made me find them cute and friendly, like a clumsy friend (even though most of them are just moving arms without a face).

A group of scientists from Centres for Human-Computer Interaction at several European universities actually claim to have found proof that people are more likely to like robots that behave awkwardly and make mistakes regularly, as this gives them more human-like features. Their research, published in the journal Frontiers in Robotics and AI, shows that humans are not convinced by flawless robots; flawlessness is in fact something people fear.

In that sense, researchers and AI enthusiasts should publish more content that includes unsuccessful demonstrations, so that more people can wash away their innate fear and feel closer to the clumsy robots around us.

Create understandable AI

To build trust in AI systems, extra effort is needed to improve literacy among non-algorithmically-literate individuals. A recent Bloomberg report stresses that understandable AI is the domain of UI/UX designers and product developers, working in collaboration with AI engineers and data scientists. Of course, AI-driven solutions should embed ‘user-first’ principles.

An understandable AI also invites people to be part of the decision-making process. How the data is collected and how the AI reaches a certain decision should be explicitly explained. A group of researchers recently found that giving people a little control over algorithms can help build trust in AI predictions. (More information here)
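To give a feel for what ‘explaining a decision’ can mean in practice, here is a minimal sketch: a toy linear scoring model (the feature names, weights and threshold are invented for illustration, not taken from any real system) that reports each feature’s contribution alongside its decision, so a user can see why it decided the way it did.

```python
# A toy "understandable AI" sketch: instead of only outputting a
# decision, the model breaks its score down per feature.
# All feature names, weights and the threshold below are invented.

FEATURES = ["income", "years_employed", "open_debts"]
WEIGHTS = {"income": 0.4, "years_employed": 0.35, "open_debts": -0.6}
THRESHOLD = 0.5

def explain_decision(applicant: dict) -> dict:
    """Score an applicant and break the score down per feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    score = sum(contributions.values())
    return {
        "score": round(score, 2),
        "approved": score >= THRESHOLD,
        # The breakdown shows *why* the model decided this way:
        # here, open debts pull the score down the most.
        "contributions": {f: round(c, 2) for f, c in contributions.items()},
    }

# Example: feature values are assumed to be pre-normalised to 0..1.
result = explain_decision(
    {"income": 0.8, "years_employed": 0.6, "open_debts": 0.3}
)
print(result)
```

Even this trivial breakdown changes the conversation: a rejected applicant no longer faces an opaque ‘no’, but a statement of which factors weighed against them, which is exactly the kind of transparency the paragraph above asks for.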

Increase AI technology-related everyday content

Many lectures dealing with AI are open to the public, but they are still dominated by people involved in AI technology or by the rare members of the general public with a keen interest in it (AI enthusiasts). To close the ever-growing gap between the general public’s and specialists’ understanding of AI, the terminology and theories behind Artificial Intelligence should be reinterpreted in plain words or combined with more everyday examples.

YouTuber Siraj Raval is one of the pioneers in this area, creating lots of fun AI-related content through his YouTube channel. He not only produces entertaining AI content described in the plainest terms, but also shows AI applied across various disciplines.

I have listed some of my opinions (through the eyes of an interaction technologist at INDEED), and would like to hear more ingenious opinions from our colleagues!”

Here are some further sources from Minjoo for you:

Post scriptum Dec, 2018:

The series has come to an end. Read the thoughts of Karel, Heiko and Tommaso here, as well as Michael’s contribution ‘self-learning idiots’.


Andrea vorm Walde


Andrea takes care of our PR and likes to show how innovative and versatile we are.
