Chatbots could one day replace search engines. Here’s why it’s a terrible idea.

Bender is not against using language models for question-answering in all cases. She has a Google Assistant in her kitchen, which she uses to convert units of measurement in a recipe. “There are times when it’s really convenient to be able to use voice to access information,” she says.

But Shah and Bender also point to a more disturbing example that surfaced last year, when Google answered the question “What is the ugliest language in India?” with the snippet “The answer is Kannada, a language spoken by approximately 40 million people in southern India.”

No easy answers

There is a dilemma here. Direct answers can be convenient, but they’re also often incorrect, irrelevant, or offensive. They can mask the complexity of the real world, says Benno Stein of Bauhaus University in Weimar, Germany.

In 2020, Stein and his colleagues Martin Potthast of the University of Leipzig and Matthias Hagen of Martin Luther University Halle-Wittenberg, Germany, published a paper highlighting the problems with direct answers. “The answer to most questions is ‘It depends,’” says Hagen. “That’s hard to convey to someone who is searching.”

Stein and his colleagues see search technology as having shifted from organizing and filtering information, through techniques such as providing a list of documents matching a search query, to making recommendations in the form of a single answer to a question. And they think that is a step too far.

Again, the problem is not the limitations of existing technology. Even with perfect technology, we wouldn’t get perfect answers, Stein says: “We don’t know what a good answer is, because the world is complex, but we stop thinking that when we see these direct answers.”

Shah agrees. Giving people a single one-size-fits-all answer can be problematic because the sources of that information, and any disagreements between them, are hidden, he says: “It really comes down to us completely trusting these systems.”

Shah and Bender offer a number of solutions to the problems they anticipate. In general, search technologies should support the many different ways people use search engines today, many of which are not served by direct answers. People often use search to explore topics they may not even have specific questions about, Shah says. In that case, simply presenting a list of documents would be more useful.

It should be clear where the information is coming from, especially if an AI is pulling stuff from multiple sources. Some voice assistants already do this, prefixing a response with “Here’s what I found on Wikipedia,” for example. Future search tools should also have the ability to say “That’s a dumb question,” Shah says. This would help the technology avoid repeating offensive or biased premises in a query.

Stein suggests that AI-powered search engines could present the reasons for their answers, giving pros and cons from different points of view.

However, many of these suggestions only underscore the dilemma identified by Stein and his colleagues. Anything that reduces convenience will be less appealing to the majority of users. “People who don’t even click through to the second page of Google results aren’t going to want to read different arguments,” Stein says.

Google says it is aware of many of the issues raised by these researchers and is working hard to develop technology that people find useful. But Google is the developer of a multibillion-dollar service. Ultimately, it will build the tools that appeal to the most people.

Stein hopes it won’t all come down to convenience. “Search is so important to us, to society,” he says.

Rosemary S. Bishop