Thinking

Oct 24, 2018

Fear versus Optimism (Part 3)

In my article 'Digitization and AI in 2 Worlds', I recently asked how we can rid the masses of fear and spread optimism. After Steffi shared her ideas with us, Minjoo also gave the question some thought and has come up with a well-considered answer.

By
Andrea Vorm Walde

As a Creative Technologist she always delves deeply into AI and digitization topics, so I was especially curious to hear her answer to my question. In summary, her answer is 'Overcome fear through knowledge':

“The philosophical debate of 1991 arguing that the Gulf War did not take place was a milestone in modern intellectual history. Jean Baudrillard claimed that the Gulf War shown on CNN and other media was merely ‘simulacra’ – symbols that had nothing to do with the actual war in Iraq. In other words, the image created by gathering these symbols moves gradually further away from reality, until it constitutes a world of ‘hyperreality’ that replaces reality.

Thus, we cannot really tell what happened in Iraq; for us, it is the same as if it had not happened. Baudrillard's claim was that the editorial power and interpretation of the media prescribe our reality. He even adopted McLuhan's claim that the medium is the “message” and the “massage” at the same time.

As Andrea mentioned in her earlier editorial, the impact of the media nowadays is so powerful that it sometimes overshadows reality and easily misleads the general public into constructing a “wrong (or at least not falsifiable) opinion”.

However, an experiment conducted by students from the Oxford Internet Institute shows an interesting result. In December 2017, they asked people to watch various science-fiction films about AI and later asked about their opinion of the internet, their experience with AI, and their willingness to automate specific tasks in their everyday lives.

Surprisingly, no matter what kind of world the movies depicted (utopian or dystopian), watching them only polarised participants' attitudes. Optimists became more eager to integrate AI into their everyday lives, and skeptics became even more guarded towards AI technology. In other words, people take in information about AI in a biased manner, selecting whatever supports their predefined attitudes. This indicates that we might witness even more polarisation of opinion as people are exposed to more media sources.

The main reason for the fear of AI results – as far as I can see – from the complexity of AI (or machine learning) technology. Amid the flood of media announcing the advent of the singularity and AI taking over human jobs, fear builds easily, as it is human nature to distrust what we don't understand.

So what viable measures could help people form a balanced view of AI?

AI-created stupidity reduces public fear

Most current media reports on AI focus heavily on sharing technical progress in the field, and few are brave enough to show the mistakes that machines make. If you have worked in technology, you know how demonstrations go: the media takes the “best shots” from numerous demonstration trials (even though most of them may fail to function properly).

A few months ago, I came across a video collecting funny mistakes that robots make. It shows the scenes behind the perfect demonstrations that receive the media spotlight:

The robots were constantly making mistakes, and that strangely made them feel cute and friendly to me (even though most of them are just moving arms without a face), like a clumsy friend.

A group of scientists from the Centre for Human-Computer Interaction and several universities in Europe claims to have found evidence that people are more likely to prefer robots that behave awkwardly and make mistakes regularly – it gives them more human-like qualities. Their research, published in the journal Frontiers in Robotics and AI, shows that humans are not convinced by flawless robots; in fact, flawlessness is something people fear.

In that sense, researchers and AI enthusiasts should publish more content that includes unsuccessful demonstrations, so that more people can wash away their innate fear and feel closer to the clumsy robots around us.

Create understandable AI

To build trust in AI systems, extra effort is needed to improve literacy among non-algorithmically literate individuals. A recent Bloomberg report stresses that understandable AI is the domain of UI/UX designers and product developers working in collaboration with AI engineers and data scientists. Of course, AI-driven solutions should embed “user-first” principles.

An understandable AI also invites people to be part of the decision-making process. How data is collected and how the AI reaches a certain decision should be explicitly explained. A group of researchers recently found that giving people a bit of control over algorithms can help build trust in AI predictions. (More information here)
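
To make this concrete, here is a minimal sketch in Python of what 'explaining a decision and leaving the user a bit of control' could look like. It is purely illustrative – the loan-approval scenario, feature names and weights are invented for this example and do not come from the Bloomberg report or the research mentioned above. The idea is simply that a transparent model can report how much each factor moved the decision, and that the user can tighten or loosen the decision threshold.

import math

# Hypothetical, hand-picked weights for a tiny loan-approval model.
# Feature names and numbers are purely illustrative, not from any cited source.
WEIGHTS = {"income_ratio": 2.1, "years_employed": 0.4, "missed_payments": -1.8}
BIAS = -0.5

def score(applicant):
    # Return the raw score and a per-feature breakdown of contributions.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

def explain(applicant, threshold=0.5):
    # Print the decision together with the reasons behind it.
    total, contributions = score(applicant)
    probability = 1 / (1 + math.exp(-total))  # logistic link turns the score into 0..1
    decision = "approved" if probability >= threshold else "declined"
    print(f"Decision: {decision} (confidence {probability:.0%}, threshold {threshold:.0%})")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if value > 0 else "lowered"
        print(f"  - {name} = {applicant[name]} {direction} the score by {abs(value):.2f}")

# The user keeps a bit of control: they can tighten or loosen the threshold.
applicant = {"income_ratio": 0.6, "years_employed": 3, "missed_payments": 1}
explain(applicant, threshold=0.5)   # approved with a 50% threshold
explain(applicant, threshold=0.7)   # declined with a stricter 70% threshold

Running it twice with different thresholds shows how the same applicant can be approved under a looser threshold and declined under a stricter one – exactly the kind of lever that gives people a feeling of participation in the decision.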

Increase AI-related everyday content

Many lectures dealing with AI are open to the public, but they are still dominated by people involved in AI technology or by the small segment of the general public with a keen interest in AI (the enthusiasts). To close the ever-growing gap between the general public's and specialists' understanding of AI, the terminology and theories behind artificial intelligence should be translated into plain words or combined with more everyday examples.

YouTuber Siraj Raval is one of the pioneers in this area, creating lots of fun AI-related content on his YouTube channel. He not only produces entertaining AI content explained in plain terms, but also shows AI being applied in various disciplines.

I have listed some of my opinions (through the eyes of an interaction technologist at Indeed), and would like to hear more ingenious opinions from our colleagues!”


And she will (I can already hear some loud thoughts from others)… I myself am not sure whether clumsy robots will spread optimism, but I will ask Heiko in the next blog post, coming soon. Maybe he will find a different answer.

In the meantime, here are some further sources from Minjoo for you:

http://www.futurekorea.co.kr/news/articleView.html?idxno=23105
https://www.oii.ox.ac.uk/blog/ai-trust-and-ai-fears-a-media-debate-that-could-divide-society/
https://internetofbusiness.com/scientists-humans-awkward-robots/
https://www.bloomberg.com/professional/blog/explainability-enough-need-understandable-ai/

Andrea Vorm Walde

PR and Communications

Andrea takes care of our PR and likes to show how innovative and versatile Indeed is. She is actually responsible for the words – nevertheless, she usually lets us go first here, because we as designers and developers want to report authentically in our blog. And that is something we do best ourselves.

