Codemotion Amsterdam had some impressive keynotes. One of the more intriguing was titled “AI and human identity”. The talk was given by Jarno Duursma, an author and TEDx speaker specialising in the human impact of modern technology. In the talk, he introduced us to the 3 ages of AI and gave us some tips for how to stay human as we enter the 3rd age, autonomous AI.
Jarno began by taking us back 20 years to when we could first experience what it was like to be in another country by watching a live webcam. He then fast-forwarded to Breakout, and the excitement he felt watching a computer teach itself how to play and win a game. Now, in 2019, we are surrounded by a sea of AI: Amazon Alexa, smartwatches, apps that can identify your food from a photo, intelligent autocorrect, the list goes on and on. Modern AI allows Shell to have a system that automatically identifies when people are smoking a cigarette too near to the petrol pumps. All this can be scary – it can seem like AI is developing superhuman capabilities. Are we really on the edge of Skynet and the war with the machines? Jarno doesn’t think so; he thinks we have reason to be optimistic if we strive to retain our humanity. But first, he wanted to take us through how we got to where we are.
The 3 ages of AI
Jarno has divided the development of AI into 3 ages: Business AI, Humanising AI and Autonomous AI. Let’s look at these in turn.
Business AI is all around us nowadays. When you book a hotel room on Booking.com the AI tries to manipulate you by creating false scarcity. “We only have one room left at this price.” “In the last 24 hours, 135 customers have viewed this hotel.” “This hotel has been booked 9 times in the last day.” Business AI is also used to make decisions about whether to sell us insurance, whether to give us a loan, or what products to recommend to us. A good example here is Zest Finance, a micro-lender who discovered that customers who USE ALL CAPITALS are a higher risk for loans. They also discovered that customers who have low battery on their phone are less likely to repay loans.
AI has started to learn human abilities such as seeing, hearing, speaking and reading. AI is even being taught to understand emotions. Fortunately, there is no such thing as a general-purpose AI which can beat us in every ability. However, AI has started to excel in narrow fields and is slowly becoming more capable than we are.
In February it was widely reported that OpenAI had developed a system, GPT-2, that was so good at writing pieces of text that they had chosen not to release it for fear it could be misused. The system works by taking a human prompt and then writing a large body of text about that prompt. Impressively, having written the text, the system can also answer free-form questions about it.
Example of GPT-2 in action. From the OpenAI website.
Listening and recognising emotions
Modern call centres often use AIs to coach and help call handlers. These AIs are able to listen to the customer, identify what they are discussing and then give helpful prompts to the call handler. The systems can also pick up on the tone of voice, speed of speech and even tiredness of the call handler and coach them to come across better on the call. Similarly, an AI can detect someone’s emotional state by measuring how they type on their keyboard.
Jarno then introduced us to AIPoly, who have developed an app to help blind and visually impaired people by letting their phone act as their eyes. The system can identify a large number of common items including 900 types of food. It can also read common phrases in 20 languages. It is also able to describe scenes and to identify simple actions a person is performing, such as drinking water, picking up a pen, etc.
A year ago, Google unveiled Duplex, an AI system that is able to hold natural conversations with people over the telephone. As an example, it can be used to book a restaurant or a hairdresser. At the time some people dismissed it as marketing hype, but since then a number of respected journalists have tested the system and found it really works.
Despite recent advances, Jarno thinks AI assistants are still in their infancy. As he put it, they are currently like a 2003 Blackberry, but one day they will become an iPhone X. AI assistants are getting better and better at understanding what we say and at answering us. Soon, systems like Alexa will truly replace human assistants. And before long, websites will ditch the menu system in favour of an interactive AI assistant that leads you through the site.
The 3rd age of AI is autonomous AI, which Jarno has branded the Digital Butler in his new book on the subject. He calls it the digital butler because this is the point when AI starts to take decisions on our behalf. We are already well on our way to this point. It’s not hard to see how the Google Duplex system can be extended to include autonomously selecting a restaurant based on your known preferences.
Where things get scary is when AI starts making pre-emptive decisions and choices for us: when it reads and answers our email without our input, or when it automatically shuffles entries in our calendar. There are clear positives to all this. For instance, a digital butler could automatically update our phone and utility contracts to ensure we always have the best deal. By simply taking a photo of a recipe, the digital butler can check your fridge and store cupboard for ingredients and order anything you are missing.
But Jarno is afraid that our appetite for novelty and convenience will mean we start to forget the downside. As an example, he presented us with a deepfake video that mapped Steve Buscemi’s face onto Jennifer Lawrence at the Golden Globes. He also revealed that people are already being blackmailed using fake videos that appear to show them in compromising positions. The rise of so-called generative adversarial networks means we should no longer trust what we see and hear.
As a static image, this just looks like a (bad) photoshop. But try watching the video…
The biggest worry is that we will just become accepting of fakes like this.
We are already seeing issues relating to algorithmic bias. The problem is that the datasets used to make the decisions are biased, and often this bias is reinforced. But worse than that is the fact that ordinary members of the public trust a decision made by a machine far more than one made by a human! As Jarno put it: “It’s like blockchain – everyone thinks everything on blockchain is true, but it isn’t.”
Emotion detection becomes another issue. Now we are seeing systems react to our perceived emotions. But while facial emotion recognition has become quite accurate, life isn’t like a Disney film. The face is not a window onto the soul.
Above all, these systems are depriving us of the opportunity to be personal. Our tastes can be and are being manipulated. Tinder finds us our life partner. Amazon finds us a present for our family. Social media determines what news to show us. Google Maps dictates our route and even our choice of restaurant. The upshot is we have become less tolerant of inconvenience. We don’t have a chance to get bored, upset or angry. And we start to believe the world revolves around us.
Fortunately, there is an easy solution to this. Introspection. We should examine ourselves and embrace both the light and dark sides of our character. We should celebrate the talents and passions that distinguish us. We should nurture the things that make us human: empathy, warmth, and compassion. These are uniquely human qualities that no AI is capable of even emulating. We should try to understand what drives us and try to stay agile and maintain our inner fulfilment.
Equally, we should accept our negative emotions. As a teenager, Jarno couldn’t cope with fear and anger. But now he knows we need these dark emotions like anger, jealousy and fear. Dealing with discomfort is how we grow as a person. By reflecting on our social and emotional intelligence we can become stronger. This is what distinguishes us from the machines, and it makes us less sensitive to the manipulations of the digital butler.