Facebook AI Experiment Shutdown Holds Lessons For IT Industry
Facebook’s reasons for shutting down an AI-based chatbot were more mundane than they first appeared
“The researchers had the systems stop creating their own language because it wasn’t what they set out to investigate and it was affecting the parameters of their study,” a Facebook spokesperson explained to eWEEK. The spokesperson stressed that the AI process that was shut down was an experimental system, not production software.
But the study did turn up some interesting and potentially useful findings, perhaps the most important being that when the agents negotiated with humans in actual sessions, the humans couldn’t tell they were talking to a robot. That matters because it demonstrates that these chatbots can settle on a desired outcome and work to realize it.
But there’s also an important lesson for IT managers now that machine learning is becoming prevalent. As machine learning and other AI capabilities become part of your critical systems, the single most important step in integrating them is to test them thoroughly.
That means testing with more than the expected parameters. You must test how your AI systems respond to wildly divergent data, and you must test them with information that’s simply wrong. After all, if you’re expecting input from humans, at some point they’re going to make a mistake.
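To make that concrete, here is a minimal sketch of what such out-of-range testing might look like in Python. The respond() function is a hypothetical stand-in for whatever entry point your own chatbot exposes; the specific malformed cases are illustrative assumptions, not a complete test suite.

```python
# A sketch of testing beyond the expected parameters: divergent,
# malformed, and simply wrong input. `respond` is a placeholder for
# your real chatbot interface.

import random
import string

def respond(message: str) -> str:
    """Stand-in for a real chatbot endpoint; replace with your system."""
    return "ok" if message.strip() else ""

def test_divergent_and_malformed_input():
    cases = [
        "I'll take three balls and you get the hats",     # expected input
        "",                                               # empty message
        "????" * 500,                                     # very long garbage
        "\x00\x7f\uffff",                                 # control/odd characters
        "".join(random.choices(string.printable, k=200))  # random noise
    ]
    for case in cases:
        reply = respond(case)
        # Whatever comes in, the system should return a string,
        # not crash, hang, or produce an unhandled exception.
        assert isinstance(reply, str)

if __name__ == "__main__":
    test_divergent_and_malformed_input()
    print("all malformed-input cases handled")
```

The point of the sketch is the shape of the cases, not the cases themselves: every class of bad input a human or upstream system could plausibly send should appear somewhere in your test data.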
In addition, you must develop a means of monitoring what’s happening when your AI system receives input from or provides output to other systems. Having your machines create their own language isn’t the problem in itself; the problem is that you need to be able to audit the results, and to do that you need to understand what the machines are saying.
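One hedged way to build that audit trail is to wrap the agent’s interface so every exchange is recorded before it is acted on. The respond() function, log path, and record format below are assumptions for illustration, not anyone’s production design.

```python
# A minimal audit-trail sketch: log every input/output pair the agent
# exchanges so the dialogue can be replayed and inspected later, even
# if the agents drift into unexpected usage.

import json
import time

AUDIT_LOG = "agent_audit.jsonl"  # hypothetical log location

def respond(message: str) -> str:
    """Stand-in for the real agent; replace with your system's call."""
    return "counter-offer: two books for one hat"

def audited_respond(message: str) -> str:
    reply = respond(message)
    record = {"ts": time.time(), "input": message, "output": reply}
    # One JSON record per exchange, appended as it happens.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return reply

if __name__ == "__main__":
    print(audited_respond("I want the hats"))
```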
Finally, deep down inside, AI agents need to be instructed to speak English all the time, not just when they think the humans are listening.
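One crude way to enforce that instruction, sketched below under stated assumptions, is to validate every outgoing message against an approved vocabulary and reject anything else. The word list and the reject behavior are illustrative; a real system would also need grammar or semantic checks, since repeated valid words can still form a private shorthand.

```python
# A hedged sketch of "speak English all the time": refuse any agent
# output containing tokens outside an approved vocabulary.

ALLOWED_WORDS = {"i", "you", "want", "the", "a", "hat", "hats",
                 "ball", "balls", "book", "books", "and", "take", "to", "me"}

def is_plain_english(message: str) -> bool:
    """True only if every token is in the approved vocabulary."""
    tokens = message.lower().split()
    return bool(tokens) and all(t.strip(".,!?") in ALLOWED_WORDS for t in tokens)

def send(message: str) -> None:
    if not is_plain_english(message):
        # Refuse the message rather than let a private language develop.
        raise ValueError(f"non-approved language in agent output: {message!r}")
    print("sent:", message)

if __name__ == "__main__":
    send("i want the hats and you take the balls")
    # send("i can i i everything else")  # raises: "can", "everything",
    # and "else" are not in the approved vocabulary
```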