Google’s use of AI to power search shows its problematic approach to organizing information


From government documents to news reports, commerce, music and social interactions, much of the world’s information is now online. And Google, founded in 1998 with the mission “to organize the world’s information and make it universally accessible and useful,” is the way we access this torrent of knowledge and culture.

In April 2024, Google’s search engine accounted for 90 per cent of the Canadian search market. For academics, its specialized Google Scholar and Google Books are mainstays of our research lives.

However, while Google Search is essential infrastructure, Google itself is recklessly sabotaging it in socially damaging ways that demand a strong regulatory response.

On May 14, Google announced it was revamping its core search website to include a central place for generative AI content, with the goal of “reimagining” search. One of its first rollouts, AI Overviews, is a chatbot that uses a large language model (LLM) to produce authoritative-sounding responses to questions directly on the results page, rather than requiring users to click through to another website.

OpenAI’s launch of ChatGPT in November 2022 ignited the generative AI frenzy. But by now, most users should be aware that LLM-powered chatbots are unreliable sources of information. This is because they are merely high-powered pattern-recognition machines. The output they produce in response to a query is assembled probabilistically: each word (or fragment of an image) is selected according to how likely it is to follow what came before, based on the patterns in the model’s training data.
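To make that pattern-matching point concrete, here is a minimal sketch in Python of probabilistic next-word selection. The vocabulary, probabilities and helper function are hypothetical illustrations of the mechanism described above, not any vendor’s actual code.

```python
# A toy illustration (not Google's or OpenAI's actual code) of how an LLM
# picks each next word by probability rather than by checking facts.
import random

def next_token(context, model_probs):
    """Sample one token according to the model's probabilities for what follows `context`."""
    candidates = model_probs[context]          # e.g. {"Paris": 0.92, "Lyon": 0.05, ...}
    tokens = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical "learned" probabilities for a single prompt.
model_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "unknown": 0.03},
}

print(next_token("The capital of France is", model_probs))
# Usually prints "Paris", but occasionally a lower-probability token:
# the model tracks likelihood, not truth.
```

The sketch shows why confident-sounding output can still be wrong: the selection step only ranks what is statistically plausible, so an unlikely (and false) continuation can still be emitted.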
