Fear is good. Fear is essential to our survival and has been key to human evolution. And fear can even spark creativity, because it summons the courage to face the unknown and the new.
But let us not be naïve. All the blogs (this one too, unfortunately), media coverage, books and conferences on AI go unread and unnoticed by the large majority of humans. I call them "the other 70%".
Whether you think AI is just a new technology we will master, or a short step away from creating gods that will rule us all: if we want grounds for optimism, we have to engage the other 70%, too. This must happen both top down and bottom up.
Biotechnology and nuclear technology, for instance, are highly regulated by most governments. Because their impact on society is huge, governments set ethical and other standards (such as laws) to protect their citizens. Although laws take a while to be passed, they are one of our most powerful tools for shaping society. We need standards for AI, too, at least as specific and far-reaching as for the other two technologies, probably even more so. This responsibility has to be fulfilled by today's politicians and state leaders, top down and swiftly.
But how do we engage the others, the masses?
Well, what if companies, whether small family-owned businesses or large corporations, exposed every single employee to AI as part of their quest to master digital transformation? What if they had them work on AI-driven projects and initiatives, empowering them with firsthand experience and knowledge? Wouldn't this be the most effective and quickest way to start a civil discussion on AI, one based on personal experience rather than fake news?
Certainly, some people will become or remain afraid of AI. They will perpetuate the quest for the next and the new.
– given in an interview led by Andrea
Andrea: Hello Heiko! Thanks for taking the time to think about my question. Your thoughts are important to me because you are a frequent speaker at conferences and therefore in contact with 'the people out there'.
Heiko: True. However, this is more a specialist audience than people on the street. And there are usually two camps: those concerned with morality, ethics and society question the topic critically. The business audience, in contrast, sees immense business potential that they assume can be implemented directly with AI and will produce cash immediately.
Andrea: Would you agree that you, the creatives, bear some responsibility?
Heiko: It is a matter of fact that everything that happens in AI development is driven by people. And that is exactly what the fear is based on: that people who want to do something bad, however each of us defines it, will teach the AI to do those 'evil things'. We creatives help shape and develop progress and innovation, and of course it is up to us to constantly create "the good" and to be "good teachers" for AI. And there are already a lot of good developments in this area.
Andrea: Could you give us some examples?
Heiko: Yes, of course. Aipoly, for example, started with real-time object recognition for visually impaired people and is now expanding into retail: https://www.aipoly.com/
Waverly Labs wants to support our communication: they are developing in-ear buds that translate almost in real time, so that people speaking different languages will be able to talk to each other quite naturally: https://www.waverlylabs.com/
Andrea: This sounds promising! And indeed, these are examples of new technologies being a huge help to humans.
Heiko: It is the same with everything new: we designers do not create a great app just because it is in demand and/or feasible; we design for a specific purpose. And that is crucial. You can use a rocket as a nuclear weapon and kill people, or as the carrier of a space capsule with which astronauts explore the universe and new worlds. A good purpose will produce good Artificial Intelligence. The only thing we really need to be afraid of is ourselves.
Andrea: But I feel that what reaches my ears is more a fear of the technology itself.
Heiko: That something artificial makes its own decisions goes beyond what we are used to, and that's the point. It scares us, just as everything unknown scares us. It is important to understand that it is this decision-making that creates the fear, not the technology. Nobody is afraid of a robot that has been taught to help me. Fear only arises when someone else has decided that the robot should spy on me. But that's not new, right?
We ourselves are creating the unknown. We are the parents of this technology. Whether we are good or bad parents is not determined by the AI. That does not mean we can fully control AI, but that we ourselves should determine its purpose and actively shape the new into something positive.
I think it's also important to understand that when we talk about AI, many people think of a "strong AI" that can develop itself and act on its own. But in fact we are still dealing today with "weak AI", which only reacts and follows the purpose given to it by us humans.
Andrea: That's interesting, and it makes sense. It is both calming and alarming at once. How would you calm down or enlighten the public further?
Heiko: I do not even want to calm people down in general. In my view that would be fatal, as it has always been fatal in history when people considered themselves safe. Enlightenment comes closer to my concern. If people sit back thinking everything will be alright, they may be badly mistaken. But those who participate with their own decisions will determine how the world and the coming generations of AI will develop. And they don't need to be afraid.
There are five aspects I would like to highlight:
We are (still) making the decisions. Leaning back means leaving the decisions to others. That is certainly comfortable, but that is how the kind of AI arises that one can be afraid of (even if one doesn't have to be). If decisions are not made by me but by others, then "I have to rely on it," "I do not care" or "I do not know about it." I myself am used to questioning things and looking behind the scenes in order to understand. Understanding creates trust and, even more importantly, the ability to make one's own decisions. Of course this can sometimes be exhausting and uncomfortable…
WE ARE NOT THAT FAR YET
Artificial Intelligence is far from what many people imagine. It can already solve impressive tasks, but it is light years away from acting like a thoughtful and nuanced human being. Should that ever succeed, such a strong AI will have learned our values beforehand. A strong AI will protect my home and spot potential intruders, and hopefully we will also have succeeded in teaching it to open the door to the courier and to act positively instead of seeing enemies everywhere.
Artificial Intelligence will develop something new and, in part, independent. But (for now) it needs to be guided by us, just as a child has to go to school to learn. If we are evil teachers, we will teach evil things; if we are good ones, we will produce a wonderful generation of friendly AI. Prohibitions tend to achieve nothing, because a child wants to learn and will learn (including everything forbidden, since it will encounter it through others). All the better if it can work out for itself what is good and beneficial.
AI AND WORK
Of course, AI will replace jobs and even eliminate some completely. Tractors replaced horses, industrial manufacturing replaced shoemakers, and modern data centres have replaced some financial experts. And yet there are still plenty of great jobs we can do and have to do. This is how change works: it always brings something new, something unknown today, not within a single day but over a transitional period. Everybody has to ask themselves what they want to do and what makes them happy. If someone is happy being a shoemaker, that is fine; there are brilliant manufacturers producing fantastic products. But given what we know about industrial production and the possibility of individualizing shoes with a single click, they should also check whether being a shoemaker is economically viable and whether their happiness is scalable. That is their own decision.
HUMAN-FIRST IN INNOVATION
I myself am convinced that the development of AI will improve the more human it becomes. People always reflect on their actions, and people are empathetic. A machine is capable of neither. A machine like the one in the 'Tatort' episode kills only because it was ordered to. A future AI, however, may be able to reconsider such an order, align it with what it was taught "at school", and then make the better decision.
After all, we have to approach it like everything new: if you want to trust the world, you need to travel the world, get to know it and take the best from it. It's no different with AI.
Andrea: Yes, that's right. The question remains how we can encourage people to do that…
Heiko: I believe we have to think in terms of generations: each generation has different fears. Children, for example, have none; you just must not keep them completely away from everything. And as already mentioned, prohibitions don't work.
Andrea: So we have to reach the parents?
Heiko: Them, and the schools as well. Maybe it would work even better if schools started the education on AI, because they reach the parents, too. And indeed, there are already some quite exciting initiatives, such as the Haba Digital Workshop (https://www.digitalwerkstatt.de) or the Hacker School (https://www.hacker-school.de). In Hamburg, for example, there are many offers, ranging from 'Kids for IT' to 'Klickerkids' to the 'Chaos School' run by the Chaos Computer Club. But the problem is that these initiatives have hardly any access to schools. A completely missed opportunity, because that would be a real benefit to the masses and would raise AI awareness in the right way.
Public opinions related to the fear of AI are mostly grounded in misconceptions about what AI is. For many people, talking about AI is like talking about aliens. So let's start with the "why": why do people get scared? Because most people don't know what AI is. This lack of knowledge is what makes us feel the fear, same as with the aliens, right?
However, some people might disagree, because even someone like Elon Musk, who has exposure to the very cutting edge of AI, has publicly claimed that his biggest fear now is AI's evolution. After such a statement, some people freak out! But then what happens? Once the talk is over, they leave the room, go back to their laptops, start typing questions and wait for a chatbot to answer. Funnily enough, the chatbot is a primitive form of AI, but nobody calls it AI, so no one is scared. People are already interacting with simple kinds of AI. Thus, if you're afraid of Artificial Intelligence, you should know that it's already all around you.
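To make the point concrete, here is a minimal sketch of the kind of simple, rule-based chatbot the paragraph alludes to. Everything in it (the rule table, the function name, the canned answers) is purely illustrative and not taken from any real product; it only shows how unspectacular this everyday "form of AI" can be:

```python
# A minimal, hypothetical rule-based chatbot: the kind of "simple AI"
# many people use daily without ever calling it AI.

RULES = {
    "hello": "Hi! How can I help you?",
    "opening hours": "We are open 9:00-17:00, Monday to Friday.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    """Return the canned answer for the first matching keyword, else a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand that. A human will get back to you."

print(reply("Hello there"))                      # matches the "hello" rule
print(reply("What are your opening hours?"))     # matches "opening hours"
print(reply("Can I pay by card?"))               # falls through to the fallback
```

No learning, no consciousness, no intent: just keyword lookup. Yet interfaces like this are already mediating everyday conversations, which is exactly why nobody thinks to be afraid of them.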
This lack of knowledge generates a kind of fear of something we do not yet know. At the moment, the public image of AI is so abstract that everyone can only get a feeling for it, not a clear understanding. It's like the fear of the dark: kids get scared because it makes them feel defenceless and not in control.
At this point the mass media step in, promoting a cartoonish image of AI. "Bad news confuses, confusion stimulates panic," wrote JFC Fuller. Think of the fruit juices in many supermarkets: you buy them because there's a juicy orange printed on the box and it clearly states "orange juice". But if the box instead showed a picture of the chemical molecules actually contained inside, with the claim "mix of industrial and chemical elements with the flavour of orange", would you still buy it? It's the same here. If I showed people the Terminator and RoboCop while telling them that these steel-made guys will soon populate the planet under the flag of AI, what reaction should we expect? Manuel Cebrian, a research manager at the MIT Media Lab, studied such reactions: "We are interested in how AI induces emotions — fear, in this particular case". He identified two categories: the idea that AI will become a conscious entity trying to destroy us, and the belief that some immoral people will use AI for evil purposes.
By definition, curiosity is "a strong desire to know or learn something". Fittingly, "Curiosity" is also the name NASA gave to the rover designed to explore Mars.
If people could put aside their fear of hostile AI, there would be more openness to explore the edges of this technology. Many examples of AI are already all around us right now: Siri, Alexa, Netflix, Facebook, and so on. Beyond that, other applications show how beautifully helpful AI can be for us: image-recognition algorithms that help dermatologists identify potential cancers, facial-recognition software used to fight overfishing in a bid to save the ocean, machine learning to predict earthquakes, and drones that scan vast coastal areas owned by oil and gas companies to avoid contaminating the sea.
I believe we can enable people to be informed about this technology, but at this stage we cannot foresee its exact evolution. For this reason, this article limits itself to exploring the first area, the "fear", aiming to empower everyone with the freedom to judge based on their own opinions. My hope is that public opinion will settle on a middle ground, because a neutral and rational view of such a powerful technology would also avoid the opposite mistake: an excess of trust, which has already led many people today to misuse current technologies, in private life as well as on a large scale.
Lastly, people like Elon Musk are right: there is undoubtedly something to be scared of, but it is not AI in its general meaning. Rather, it will be a specific form of AI, or a particular product or service coming out of this technology. That is something we cannot predict and cannot judge yet. The same thinking applies to social media, the internet, phones and any other technology: some humans applied them correctly, others didn't. In the future we will become more conscious of AI. Meanwhile, we can only define its ethical guidelines.
If you are interested in Andrea's, Steffi's and Minjoo's thoughts, you should also read the first part of this blog series.