- February 14, 2023
- Posted by: Shalini W
- Categories: Information Technology, Search Engine Optimization
Artificial intelligence (AI) is coming to Internet search, months after the chatbot ChatGPT stunned the world with its remarkable ability to write essays and answer questions like a human. Google, Bing, and Baidu, three of the world’s largest search engines, announced last week that they will integrate ChatGPT or similar technology into their search products, allowing users to get direct answers or engage in a conversation rather than simply receiving a list of links after typing in a query. How will this change the way people interact with search engines? And are there risks associated with this sort of human–machine interaction?
Bing employs the same technology as ChatGPT, which was created by OpenAI in San Francisco, California, and all three companies are using large language models (LLMs). LLMs generate convincing sentences by emulating the statistical patterns of text in a large database. Bard, Google’s AI-powered search tool, was unveiled on February 6 and is now being tested by a select group of users. Microsoft’s version is widely accessible, although there is a waiting list for unrestricted access. Baidu’s ERNIE Bot will be released in March.
A few smaller startups have also built AI-powered search engines prior to these announcements. Aravind Srinivas, a computer scientist in San Francisco who co-founded Perplexity, an LLM-based search engine that provides answers in conversational English, says: “Search engines are evolving into this new state where you can actually start talking to them and conversing with them like you would with a friend.”
Compared with a traditional Internet search, the intensely personal nature of a conversation could sway perceptions of search results. Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland, hypothesizes that people are more likely to believe the responses of a chatbot that engages in conversation than those of a conventional search engine.
A team from the University of Florida in Gainesville found in a 2022 study that for participants interacting with chatbots employed by companies such as Amazon and Best Buy, the more human-like the interaction seemed, the greater their trust in the organization.
That could be advantageous, making searches faster and smoother. But because AI chatbots are fallible, a heightened sense of trust could prove detrimental. Google’s Bard confidently gave a wrong answer to a question about the James Webb Space Telescope during its own technology demonstration. And ChatGPT has a tendency to generate fictitious responses to questions it cannot answer – a phenomenon known in the field as hallucination.
A Google representative stated that Bard’s blunder “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme.” However, some believe that such inaccuracies, assuming they are discovered, could cause consumers to lose trust in chat-based search rather than gain it. “Early perception can have a significant impact,” says Sridhar Ramaswamy, CEO of Neeva, an LLM-powered search engine launched in January and based in Mountain View, California. The error wiped $100 billion off Google’s valuation as investors dumped stock out of fear for the company’s future.
Transparency is lacking
The problem of inaccuracy is exacerbated by a comparative lack of transparency. Typically, search engines present users with their sources – a list of links – and leave them to decide which sources they trust. By contrast, it is rarely known what data an LLM was trained on; was it the Encyclopedia Britannica or a gossip blog?
“It is absolutely opaque how [AI-powered search] will operate, which might have significant consequences if the language model misfires, hallucinates, or transmits false information,” explains Urman.
Urman asserts that if search bots make enough errors, they could undermine users’ perception of search engines as impartial arbiters of truth, rather than fostering trust through their conversational skills.
She has conducted as-yet-unpublished research indicating that current levels of trust are high. She examined how people perceive existing features that Google uses to enhance the search experience, such as ‘featured snippets’, in which an excerpt from a page deemed particularly relevant to the search appears above the link, and ‘knowledge panels’, which Google generates automatically in response to searches about, for example, a person or organization. Nearly 80% of respondents to Urman’s survey considered these features accurate, and roughly 70% believed they were objective.
Chatbot-powered search blurs the line between humans and robots, according to Giada Pistilli, chief ethicist at Hugging Face, a Paris-based data-science platform that advocates the responsible use of artificial intelligence. She is concerned about the rate at which businesses are adopting AI advancements: “These new technologies are constantly thrust upon us without any control or training framework to know how to handle them.”