Interview

Nov 8, 2018

Fear versus Optimism (Part 5)

We continue our series. This time Andrea interviews Heiko about how to spread optimism instead of fear in the face of AI.

By
Andrea Vorm Walde

Andrea: Hello Heiko! Thanks for taking the time to think about my question. Your thoughts are important to me, as you are a frequent speaker at conferences and therefore in contact with 'the people on the street'.

Heiko: True. However, this is more of a specialist audience than the people on the street. And there are usually two camps: those concerned with morality, ethics and society question the topic critically. The business audience, in contrast, sees immense business potential, which they assume can be implemented directly with AI and will produce cash immediately.

Andrea: Would you agree that you, the creatives, bear some responsibility?

Heiko: It is a matter of fact that everything that happens in AI development is driven by people. And that is exactly what the fear is based on: that people who want to do something bad, however each of us defines that, teach the AI to do those 'evil things'. We creatives help shape and develop progress and innovation, and of course it is up to us to keep creating 'the good' and to be 'good teachers' for AI. And there are already a lot of good developments in this area.

Andrea: Could you give us some examples?

Heiko: Yes, of course: Aipoly, for example, started with real-time object recognition for visually impaired people and is now expanding into retail: https://www.aipoly.com/
Waverly will support our communication: they are developing in-ear buds that translate almost in real time, so that people who speak different languages will be able to talk to each other quite naturally: https://www.waverlylabs.com/

Andrea: This sounds promising! And indeed, these are examples of new technologies being a huge help to humans.

Heiko: It is the same with everything new: we designers do not create a great app just because it is in demand or feasible; we design for a specific purpose. And that is crucial: you can use a rocket as a nuclear weapon and kill people, but it can also carry a space capsule with which astronauts explore the universe and new worlds. A good purpose will produce good Artificial Intelligence. The only thing we really need to be afraid of is ourselves.

Andrea: But what I hear is more a fear of the technology itself.

Heiko: It goes beyond our imagination that something artificial makes its own decisions; that's the point. That scares us, just as everything unknown scares us. It is important to understand that it is this decision-making that creates fear, not the technology. Nobody is afraid of a robot that has been taught to help me. Fear only arises when someone else has decided that the robot should spy on me. But that's not new, right?
We ourselves are creating the unknown. We are the parents of this technology. Whether we are good or bad parents is not determined by AI. That does not mean we can control AI, but that we ourselves should determine its purpose and actively shape the new into something positive.
I also think it's important to understand that when we talk about AI, many people think of a "strong AI" that develops and acts on its own. In fact, today we are still dealing with "weak AI", which only reacts and follows the purpose given to it by us humans.

Andrea: That’s interesting, and it makes sense. It is both calming and alarming at the same time. How could you further calm down or enlighten the public?

Heiko: I do not actually want to calm people down in general. In my view that would be fatal, as it has always been fatal throughout history when people considered themselves safe. Enlightenment is closer to my concern. If people sit back thinking everything will be alright, they may, in the worst case, be proven wrong. But those who participate with their own decisions will determine how the world and the coming generations of AI grow. And they don’t need to be afraid.
There are five aspects I would like to highlight:

UNCOMFORTABLE DECISIONS

We are (still) making the decisions. Leaning back means leaving the decisions to others. That's certainly comfortable, but that is how the kind of AI one can be afraid of arises (even if it doesn't have to). If decisions are made not by myself but by others, I "have to rely on them", "don't care" or "don't know about them". I myself am used to questioning things and looking behind the scenes in order to understand them. Understanding creates trust and, even more importantly, the ability to make one's own decisions. Of course, this can sometimes be exhausting and uncomfortable...

WE ARE NOT THAT FAR YET

Artificial Intelligence is far from what many people imagine. It can already solve impressive tasks, but it is light years away from acting like a thoughtful, nuanced human being. Should that succeed at some point, a strong AI will have learned our values beforehand. A strong AI will protect my home and spot potential intruders, and hopefully we will also have succeeded in teaching it to open the door for the courier and to act positively instead of seeing enemies everywhere.

TEACHING

Artificial Intelligence will develop something new and, in part, independent. But (for now) it needs to be guided by us, just as a child has to go to school to learn things. If we are evil teachers, we will teach evil things; if we are good ones, we will produce a wonderful generation of friendly AI. Prohibitions tend to achieve nothing, because a child wants to learn and will learn (including everything forbidden, since it will come into contact with it through others). All the better if it is able to find out for itself what is good and beneficial.

AI AND WORK

Of course, AI will replace jobs and even eliminate some completely. Tractors have replaced horses, industrial manufacturing methods the shoemakers, and modern data centres some financial experts. And yet there are still a lot of great jobs we can do and have to do. This is how change works: it will always bring something new, something unknown today. Not within a single day, but over a transitional period. Everybody has to ask themselves what they want to do and what makes them happy. If someone is happy being a shoemaker, that is fine; there are brilliant manufactories producing fantastic products. But with our knowledge of industrial production and the possibility of customizing shoes with a single click, they should check whether being a shoemaker is also economically viable and whether their happiness is scalable. That is their own decision.

HUMAN-FIRST IN INNOVATION

I myself am convinced that the development of AI will improve the more human it becomes. People reflect on their actions and people are empathetic. A machine is capable of neither. A machine like the one in 'Tatort' only kills because it was ordered to. Future AI, however, may be able to question that order, align it with what it was taught "at school", and then make the better decision.

In the end, we have to deal with it like with everything new: if you want to trust the world, you need to travel the world, get to know it and take the best from it. It's no different with AI.

Andrea: Yes, that's right. The question remains how we can encourage people to do that...

Heiko: I believe we have to think in terms of generations; each generation has different fears. Children, for example, have none; you just must not shield them from everything completely. And as already mentioned, prohibitions don’t work.

Andrea: So we have to reach the parents?

Heiko: Them, and also the schools. Maybe it would work even better if the schools started the education on AI, because they reach the parents, too. And indeed, there are already quite exciting initiatives, such as the Haba Digital Workshop (https://www.digitalwerkstatt.de) or the Hacker School (https://www.hacker-school.de). In Hamburg, for example, there are a lot of offerings, ranging from 'Kids for IT' to 'Klickerkids' to the Chaos Computer Club's 'Chaos School'. But the problem is that these initiatives do not have access to schools. A completely missed opportunity, because that would be a real benefit to the masses and would raise AI awareness in the right way.


This interview is part of a blog post series, starting with this blog post. Also read Minjoo’s, Karel’s and Stefanie’s answers.

Andrea Vorm Walde

PR and Communications

Andrea takes care of our PR and likes to show how innovative and versatile Indeed is. Actually, she is responsible for the words; nevertheless, she usually lets us go first here, because we as designers and developers want to report authentically in our blog. And that is something we do best ourselves.
