The TRUTH behind why Facebook shut down their AI



Unless you have managed to avoid old, new and social media alike in the past few weeks, you will have seen a story circulating… a story with increasingly panicky headlines about Artificial Intelligence.

Facebook’s AI technology had begun to develop its own ‘language’, leading Facebook to make the decision to halt the project, and many journalists to write some worrying headlines:

“Facebook shuts down AI after it invents its own creepy language” – The Daily Dot

“Did we humans just create Frankenstein?” – International Business Times

The Sun even used a quote from a robotics professor saying this “could be lethal” if similar technology were used in military robots (which seems a little obvious when you think about it), while other articles drew on references to popular films, books and television in which human life is exterminated by AI.


Science NON-fiction?

We have grown up in a world of Sci-Fi tales: impending Skynet-esque disasters, and Voight-Kampff machines to determine whether someone is human or a ‘replicant’. We even have the real-life ‘three laws of robotics’ from Isaac Asimov, and the Turing test from the 1950s (in which a machine’s ability to show intelligent behaviour is tested to see if it is equivalent to, or indistinguishable from, that of a human). It is therefore natural to see such headlines and worry that humankind is about to come to a bleak, dystopian end.

 

To add to the scare-factor, most articles have been iced with the following (and admittedly, very creepy) passage of text between two Facebook chatbots:

 

Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me

 

In a story by Philip K. Dick or Arthur C. Clarke, this snippet of text would be enough to begin an AI revolution which will inevitably take over humanity. Luckily, however, the reality is slightly less romanticised and more prosaic.

 


The language of the bots

Our two bots, Bob and Alice, were designed to show that it is “possible for dialog agents with differing goals… to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes” (see Facebook’s research blog). In other words, they were made for the purpose of developing negotiation software – which is EXACTLY what they were doing!

Facebook aimed to develop a chatbot that would negotiate deals with an end user so fluently that said user would not realise they were talking with a robot.

This would be done through the bots learning from human interaction. Unfortunately, as Bob and Alice are categorically NOT human (no Turing test needed here!), they could not pick up the human nuances of language and communication, and so began to derive a kind of ‘bot shorthand’ – or, as some outlets are reporting, a separate ‘language’.

Bob and Alice had no incentive to communicate according to the human-comprehensible rules of English. Essentially, lacking any other influence, human or otherwise, the two bots used their finite vocabulary to communicate in a way that can still be broken down and recognised as ‘chatting’ in order to negotiate – exactly as they were programmed to do. Neither ‘knew’ it was talking to another bot, as they have no consciousness, only programming. It may look incredibly creepy, but it is simply that.
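To make that incentive gap concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not Facebook’s training code – the item values and utterances are invented): if the only score an agent optimises is the value of the deal it ends up with, a repetitive “to me to me” utterance earns exactly the same reward as a fluent English sentence, so nothing nudges the bots back towards readable English.

```python
# Purely illustrative sketch of the missing incentive - NOT Facebook's code.
# Assumption: each agent is only rewarded for the value of the items it
# ends up with; the wording used to reach the deal is never scored.

ITEM_VALUES = {"book": 1, "hat": 3, "ball": 2}   # invented private values

def deal_reward(items_won):
    """Reward = total value of the items this agent secured in the deal."""
    return sum(ITEM_VALUES[item] * count for item, count in items_won.items())

# Two very different utterances that lead to the same agreed split...
fluent_utterance = "you take the books and the hat, I will keep the balls"
drifted_utterance = "balls have zero to me to me to me to me to me"
agreed_split = {"ball": 2}   # this agent keeps both balls either way

# ...earn an identical reward, so the optimiser has no reason to prefer
# human-readable English over the drifted 'bot shorthand'.
print(deal_reward(agreed_split))                 # 4, whichever utterance was used
print(fluent_utterance != drifted_utterance)     # True: wording differs, reward does not
```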

 

So, what did the text mean?

The chatbots were given a fixed lexicon to negotiate with – represented in the user interface as innocuous items such as books, hats and (rather comically, given the seriousness of the coverage) balls.

This means that the text just shows two chatbots successfully discussing how to mutually and agreeably split some balls.
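As a toy illustration of what “mutually and agreeably” means here (again my own sketch – the item pool and the bots’ private valuations are invented, not taken from the experiment), the whole negotiation boils down to finding a split of books, hats and balls that scores well for both sides:

```python
# Toy sketch of the negotiation setting, with invented numbers for illustration.
# The pool of items and each bot's private valuation of them are assumptions.

POOL = {"book": 2, "hat": 1, "ball": 2}              # items on the table
BOB_VALUES = {"book": 0, "hat": 4, "ball": 3}        # what Bob thinks each item is worth
ALICE_VALUES = {"book": 3, "hat": 2, "ball": 2}      # what Alice thinks each item is worth

def score(values, share):
    """Total value of a share of items under one bot's private valuation."""
    return sum(values[item] * count for item, count in share.items())

# A proposed split: Bob takes the hat and the balls, Alice takes the books.
bob_share = {"book": 0, "hat": 1, "ball": 2}
alice_share = {item: POOL[item] - bob_share[item] for item in POOL}

print("Bob's score:", score(BOB_VALUES, bob_share))        # 10
print("Alice's score:", score(ALICE_VALUES, alice_share))  # 6
# Both sides come away with value, so the split is 'mutually agreeable' -
# all the creepy-looking chat was in service of reaching something like this.
```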

How can the text be understood?

Codewords, shorthand, and slang are an important part of communication.

Every social group has its own way of talking about things. For example, you would not introduce yourself in a job interview in the same manner that you would on a blind date (well, let’s hope not anyway!). And I’m sure everyone has worked somewhere where the dreaded acronyms appear and, as the newbie, you don’t understand what they mean yet – but you learn. Humans naturally adapt how they communicate so that they can be most easily understood in their situation. That is exactly what the bots did.

One article stated that there was ‘obvious danger’ in the ‘conversation’ between the bots because it was ‘impossible to understand’. This is arguably incorrect. Linguists are trained to decipher languages and communications that have never been seen before, from tribes deep in the Amazon to ancient text carved on stone. This ‘language’ is merely what the creators gave the bots, along with an algorithm to develop that communication in line with the rhythms of a normal English conversation. There are no new symbols, the words’ meanings have all remained the same, and the syntax is easily understandable. In the state it was in, therefore, the ‘language’ could most certainly be controlled – and that is what Facebook did: it controlled it by shutting it down.

(Pictured: examples of scripts linguists have deciphered – Ancient Albanian (Gjuha Shqipe) and Ancient Greek.)

 

The worry should not come from the fact that the bots were having a successful conversation… that was, in fact, their objective. However, as this outcome was a new discovery, Facebook made the sensible decision to halt the negotiations and learn more before continuing. We cannot speculate about what Facebook is currently doing to move the project along (even if that means never repeating the experiment and simply writing a report), but the key point is that these bots are, and were, harmless compared with the sensationalist headlines and articles we have seen.

 


What is next for the bots?

Whilst it is lovely to hear that Bob and Alice, the two bots in question, were not trying to take over the world, there are arguably still very good reasons not to let intelligent machines develop their own language – especially one which, while it can be broken down and understood in context (i.e. negotiation), we could not meaningfully and quickly understand.

Imagine the incredible difficulty of trying to debug a system like this if it went wrong, for example… but, again, this situation is still a long way from the AI of our Sci-Fi imaginations.

Facebook has stated that it shut down the conversation because it has no interest in the bots talking to each other, only to humans. Obviously, we can never know how true that is, but the fact is that they HAVE shut it down.

What has happened is a small phenomenon arising from two intelligent machines learning from one another, and it has been stopped.

In this case, rather than creating Frankenstein or I, Robot, all the bots were capable of doing was becoming more efficient at trading each other’s balls.