With its AI-powered chatbot, Meta was headed for disaster

The big vogue for chatbots in the mid-2010s seemed to be over. But on Friday, August 5, Meta showed that work on this technology continues by presenting BlenderBot 3, its new "state-of-the-art chatbot". According to the company, this text-based bot can "talk naturally with people" on "almost any subject", a promise chatbot creators have made repeatedly but never delivered on.

Still a prototype, BlenderBot 3 is available for free (in the US only, for the moment) so that a large number of volunteer testers can help it improve through a conversation-rating system. It has therefore been questioned extensively by journalists and other curious users since it went online, and the initial assessment sounds like a sad chorus: BlenderBot 3 readily disparages Facebook, criticizes Mark Zuckerberg's style of dress, and even spins conspiratorial or antisemitic remarks. Before launching the tool, Meta warned users that the chatbot is "likely to make false or offensive statements". But in its press release, the company specified that it had introduced safeguards to filter out the worst…

Meta’s chatbot, Meta’s first critic

BlenderBot's goals are long-term. The researchers are not trying to build a functional, marketable tool in the short term; they want to advance the state of the art of chatbots. Specifically, their tool aims to integrate human conversational qualities (such as personality traits) into its responses. Equipped with a long-term memory, it should be able to adapt to the user as the exchanges progress. In their press release, the researchers specify that BlenderBot should strengthen chatbots' conversational abilities while "avoid[ing] learning unnecessary or dangerous answers".

The problem, as always, is that the chatbot searches the Internet for information to fuel the conversation, and it does not filter what it finds carefully enough. Asked about Meta's CEO, Mark Zuckerberg, it can answer: "He is a competent businessman, but his practices are not always ethical. It's funny how he has all that money but still wears the same clothes!", reports Business Insider. When the subject turns to its parent company, it does not hesitate to recall the myriad of scandals that have marred Facebook (and partly justified its change of identity). Or it declares that its life has been much better since it deleted Facebook.
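To make the pattern concrete, here is a minimal sketch of internet-augmented generation, the general approach BlenderBot 3 uses at a high level. The `web_search` and `generate_reply` helpers are hypothetical placeholders for illustration, not Meta's actual components:

```python
# Minimal sketch of internet-augmented chat: retrieve, then generate.
# web_search and generate_reply are hypothetical stand-ins.

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical search call; a real system would query a search API."""
    return [f"snippet about {query!r} #{i}" for i in range(k)]

def generate_reply(history: list[str], evidence: list[str]) -> str:
    """Hypothetical model call; a real system would condition a language
    model on both the dialogue history and the retrieved snippets."""
    return f"(reply grounded in {len(evidence)} snippets)"

def respond(history: list[str], user_message: str) -> str:
    history = history + [user_message]
    # The weak point the article describes: retrieved snippets are fed
    # into the reply with little or no filtering for accuracy or tone.
    evidence = web_search(user_message)
    return generate_reply(history, evidence)

print(respond([], "What do you think of Mark Zuckerberg?"))
```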

If the bot is so negative about Meta, it is simply because it draws on the most popular search results about Facebook, which tell the story of the company's backlash. In doing so, it perpetuates a bias that turns out to work against its own creator. But these slips are not limited to amusing jabs, and that is the problem. To a journalist from The Wall Street Journal, BlenderBot claimed that Donald Trump was still president and "would still be, with his second term ending in 2024", thereby relaying a conspiracy theory. To top it off, Vice reports that BlenderBot's responses are "generally neither realistic nor good" and that it "often changes the subject" abruptly.

History repeats itself

These slips from the amusing to the dangerous have an air of déjà vu. In 2016, Microsoft launched the Tay chatbot on Twitter, which was supposed to learn in real time from its discussions with users. It failed: within a few hours, the bot was relaying conspiracy theories as well as racist and sexist remarks. Less than 24 hours later, Microsoft pulled the plug on Tay and apologized profusely for the failure.

Meta attempted a similar approach, based on a massive language model with more than 175 billion parameters. The model was trained on giant text corpora (mostly publicly available) with the aim of extracting a mathematical representation of language. For example, one of the datasets the researchers built contained 20,000 conversations on more than 1,000 different topics.
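For readers curious about what this recipe looks like in practice, here is a minimal sketch of fine-tuning a pretrained causal language model on dialogue data, using the Hugging Face transformers library. The small OPT checkpoint and the toy dialogue are illustrative stand-ins, not Meta's actual 175-billion-parameter model or training corpus:

```python
# A minimal sketch: fine-tune a pretrained causal language model on
# dialogue transcripts. Checkpoint and data are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Each conversation is flattened into one training string; a real corpus
# would contain thousands of them across many topics.
dialogue = "User: Who founded Facebook?\nBot: Mark Zuckerberg, in 2004.\n"
batch = tokenizer(dialogue, return_tensors="pt")

# Standard next-token prediction: passing input_ids as labels makes the
# model compute the language-modeling loss over the conversation.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()  # one of many gradient steps in real training
```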

The problem with these large models is that they reproduce the biases in the data they have been fed, often with a magnifying effect. And Meta was aware of these limitations: "Since all AI-powered conversational chatbots are known to sometimes mimic and generate dangerous, biased or offensive remarks, we conducted extensive research, hosted workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot may still make rude or offensive comments, which is why we collect feedback." Clearly, the additional safeguards did not have the desired effect.

Faced with the repeated failures of large language models and a long series of abandoned projects, the industry has fallen back on less ambitious but more reliable chatbots. The majority of customer-assistance bots today follow a predefined decision tree and never leave it, even if that means admitting they have no answer or handing the customer over to a human operator. The technical challenge then becomes understanding the questions users ask and matching them to the most relevant answers in the tree.
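A minimal sketch of such a decision-tree bot follows, assuming a simple keyword-matching front end; the node names and wording are invented for illustration:

```python
# Minimal sketch of a decision-tree customer-assistance bot.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str
    # Maps a recognized keyword in the user's message to the next node.
    children: dict[str, "Node"] = field(default_factory=dict)

# The bot never leaves this tree: unrecognized input escalates to a human.
tree = Node("How can I help you?", {
    "billing": Node("Is this about an invoice or a refund?", {
        "invoice": Node("You can download invoices from your account page."),
        "refund": Node("Refunds take 5 to 10 business days."),
    }),
    "delivery": Node("Please enter your tracking number."),
})

def step(node: Node, message: str) -> Node | None:
    """Return the matching child, or None to hand off to a human operator."""
    for keyword, child in node.children.items():
        if keyword in message.lower():
            return child
    return None

node = step(tree, "I have a billing question")
print(node.prompt if node else "Let me connect you to a human operator.")
```

Because every reachable reply is written in advance, such a bot can never produce an offensive statement; the trade-off is that it can only handle the questions its designers anticipated.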

Meta is transparent

While BlenderBot 3's success is more than questionable, Meta at least demonstrates a rare transparency, a quality usually lacking in AI-powered tools. Users can click on the chatbot's answers to see, with varying levels of detail, the sources of the information. In addition, the researchers share the code, data, and model used to power the chatbot.

In The Guardian, a Meta spokesperson also clarified that "anyone using the Blender Bot is required to acknowledge that they understand that the discussion is for research and entertainment purposes only, that the bot may make false or offensive statements, and that they agree not to intentionally encourage the bot to make offensive statements".

In other words, BlenderBot reminds us that the ideal of chatbots capable of expressing themselves like humans is still a long way off, with many technical barriers left to overcome. But this time, Meta has taken enough precautions in its approach that the story has not turned into a scandal.