Around the middle of the month, FastCo Design reported on two AI agents designed by Facebook that invented their own gibberish language and were subsequently shut down by the company. This was quickly picked up and reported as the advent of the machines, along with various other variants of doom and gloom in which AI would take over human lives.
A snippet of the conversation:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Fortunately, Facebook’s AI Research (FAIR) unit had posted a blog entry last month explaining the purpose behind these two AI agents: to show that it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.” The agents were given a task, which in this case was negotiating how to divide a set of goods (books, hats and balls) until both parties agreed on a split.
The end goal was a chatbot (chatbots themselves are fairly commonplace) that could learn from human interaction to negotiate deals so seamlessly that the end user would not realise they were interacting with a bot. According to FAIR, this goal was met. The catch was that the bots were not incentivised to use English or any other human-comprehensible language, so they drifted into a gibberish shorthand best suited to the task, which, stripped of context, does come off as creepy.
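A minimal sketch of this reward structure (with hypothetical item values and allocations, not FAIR's actual code) shows why the drift happens: each agent is rewarded only for the value of the items it ends up with, so nothing in the objective rewards staying in English.

```python
# Illustrative sketch of a negotiation reward. Each agent has private
# valuations over item types; its reward for a deal is the total value
# of the items it receives. Note that the reward depends only on the
# final allocation, not on *how* the agents talked -- which is why
# agents trained end-to-end on this objective alone can drift into a
# private shorthand instead of English.

def deal_reward(values, allocation):
    """Value an agent derives from its share of the items."""
    return sum(values[item] * count for item, count in allocation.items())

# Hypothetical private valuations for the two agents
alice_values = {"book": 1, "hat": 3, "ball": 1}
bob_values = {"book": 3, "hat": 0, "ball": 2}

# One candidate split of a pool of 2 books, 1 hat and 3 balls
alice_share = {"book": 0, "hat": 1, "ball": 3}
bob_share = {"book": 2, "hat": 0, "ball": 0}

print(deal_reward(alice_values, alice_share))  # 6
print(deal_reward(bob_values, bob_share))      # 6
```

Under this toy split, both agents score 6, so both would accept; a different split would shift reward from one agent to the other, which is what gives the negotiation its back-and-forth character.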
“Agents will drift off understandable language and invent code words for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”