The artificial intelligence used in search engines has repeatedly demonstrated serious accuracy problems when working with news sources. According to a recent study by the Tow Center for Digital Journalism at Columbia Journalism Review, eight popular AI-powered search services built on generative models gave incorrect answers to queries about news sources in more than 60% of cases.
AI-based search systems such as Perplexity Pro and Grok 3 have been criticized for frequently providing plausible but incorrect answers, including inaccurate headlines, sources, and URLs. Moreover, the paid versions of these models proved less accurate than the free options, which only adds to users' concern.
These problems are further aggravated by the fact that some AI platforms, such as Perplexity, ignore directives that forbid crawling certain web resources. This can lead to violations of intellectual property rights and the uncontrolled spread of misinformation. One striking example: Perplexity correctly identified material from National Geographic despite the publisher's explicit restriction on crawler access to those resources.
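For context, the "directives" in question are the rules a site publishes in its robots.txt file, which a compliant crawler is expected to consult before fetching pages. Below is a minimal Python sketch of that check, using only the standard library (the URL here is illustrative, and "PerplexityBot" is the user-agent string Perplexity publicly documents for its crawler):

```python
import urllib.robotparser

# Load the site's published crawl directives (URL is illustrative).
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.nationalgeographic.com/robots.txt")
rp.read()  # fetches and parses robots.txt

# A compliant crawler checks permission before fetching each page.
page = "https://www.nationalgeographic.com/science/article/example"
if rp.can_fetch("PerplexityBot", page):
    print("robots.txt permits this crawler to fetch the page")
else:
    print("robots.txt disallows this crawler; a compliant bot stops here")
```

The complaint against Perplexity is precisely that content covered by such a Disallow rule still surfaced in its answers, suggesting the check sketched above was skipped or bypassed.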
Naturally, these problems have not gone unnoticed by publishers, who face a new dilemma: blocking AI systems from their resources can worsen their position, while opening access leads to a significant loss of visitors. Time's Chief Operating Officer expressed hope that artificial intelligence technologies will improve over time and become more reliable.
In response to the scandal, OpenAI and Microsoft acknowledged the presence of such errors in their systems and promised to work on correcting them. However, the problem of AI reliability and accuracy in search engines remains unresolved, and its solution may require considerable effort from developers and rights holders.